00:00:00.000 Started by upstream project "autotest-per-patch" build number 126201 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.180 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.242 Using shallow fetch with depth 1 00:00:00.242 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.242 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.295 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.295 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.777 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.789 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.802 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.802 > git config core.sparsecheckout # timeout=10 00:00:05.812 > git read-tree -mu HEAD # timeout=10 00:00:05.829 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.850 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.850 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.961 [Pipeline] Start of Pipeline 00:00:05.978 [Pipeline] library 00:00:05.980 Loading library shm_lib@master 00:00:05.980 Library shm_lib@master is cached. Copying from home. 00:00:06.001 [Pipeline] node 00:00:06.010 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.011 [Pipeline] { 00:00:06.024 [Pipeline] catchError 00:00:06.026 [Pipeline] { 00:00:06.039 [Pipeline] wrap 00:00:06.050 [Pipeline] { 00:00:06.060 [Pipeline] stage 00:00:06.061 [Pipeline] { (Prologue) 00:00:06.243 [Pipeline] sh 00:00:06.527 + logger -p user.info -t JENKINS-CI 00:00:06.545 [Pipeline] echo 00:00:06.546 Node: GP6 00:00:06.554 [Pipeline] sh 00:00:06.851 [Pipeline] setCustomBuildProperty 00:00:06.866 [Pipeline] echo 00:00:06.868 Cleanup processes 00:00:06.874 [Pipeline] sh 00:00:07.154 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.154 573087 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.169 [Pipeline] sh 00:00:07.457 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.457 ++ grep -v 'sudo pgrep' 00:00:07.457 ++ awk '{print $1}' 00:00:07.457 + sudo kill -9 00:00:07.457 + true 00:00:07.470 [Pipeline] cleanWs 00:00:07.478 [WS-CLEANUP] Deleting project workspace... 00:00:07.478 [WS-CLEANUP] Deferred wipeout is used... 
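The "Cleanup processes" step above chains pgrep, grep and awk to kill anything still running out of this workspace from an earlier build, then forces success so an empty match cannot fail the stage. A minimal stand-alone sketch of the same idea (the workspace path is the one used by this job; other nodes differ):

    #!/usr/bin/env bash
    # Kill leftover SPDK processes from a previous run of this workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest    # path taken from this log
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids
    true    # mirror the '+ true' above: cleanup never fails the build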
00:00:07.485 [WS-CLEANUP] done 00:00:07.488 [Pipeline] setCustomBuildProperty 00:00:07.499 [Pipeline] sh 00:00:07.796 + sudo git config --global --replace-all safe.directory '*' 00:00:07.919 [Pipeline] httpRequest 00:00:07.957 [Pipeline] echo 00:00:07.959 Sorcerer 10.211.164.101 is alive 00:00:07.966 [Pipeline] httpRequest 00:00:07.971 HttpMethod: GET 00:00:07.971 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.972 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.976 Response Code: HTTP/1.1 200 OK 00:00:07.977 Success: Status code 200 is in the accepted range: 200,404 00:00:07.977 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.957 [Pipeline] sh 00:00:09.244 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.260 [Pipeline] httpRequest 00:00:09.276 [Pipeline] echo 00:00:09.277 Sorcerer 10.211.164.101 is alive 00:00:09.285 [Pipeline] httpRequest 00:00:09.289 HttpMethod: GET 00:00:09.289 URL: http://10.211.164.101/packages/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:09.290 Sending request to url: http://10.211.164.101/packages/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:09.305 Response Code: HTTP/1.1 200 OK 00:00:09.305 Success: Status code 200 is in the accepted range: 200,404 00:00:09.306 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:45.432 [Pipeline] sh 00:00:45.714 + tar --no-same-owner -xf spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:49.038 [Pipeline] sh 00:00:49.323 + git -C spdk log --oneline -n5 00:00:49.323 255871c19 autopackage: Move core of the script to autobuild 00:00:49.323 bd4841ef7 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING 00:00:49.323 719d03c6a sock/uring: only register net impl if supported 00:00:49.323 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:49.323 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:49.333 [Pipeline] } 00:00:49.349 [Pipeline] // stage 00:00:49.357 [Pipeline] stage 00:00:49.359 [Pipeline] { (Prepare) 00:00:49.377 [Pipeline] writeFile 00:00:49.393 [Pipeline] sh 00:00:49.680 + logger -p user.info -t JENKINS-CI 00:00:49.694 [Pipeline] sh 00:00:49.979 + logger -p user.info -t JENKINS-CI 00:00:49.992 [Pipeline] sh 00:00:50.277 + cat autorun-spdk.conf 00:00:50.277 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.277 SPDK_TEST_NVMF=1 00:00:50.277 SPDK_TEST_NVME_CLI=1 00:00:50.277 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.277 SPDK_TEST_NVMF_NICS=e810 00:00:50.277 SPDK_TEST_VFIOUSER=1 00:00:50.277 SPDK_RUN_UBSAN=1 00:00:50.277 NET_TYPE=phy 00:00:50.285 RUN_NIGHTLY=0 00:00:50.291 [Pipeline] readFile 00:00:50.320 [Pipeline] withEnv 00:00:50.322 [Pipeline] { 00:00:50.335 [Pipeline] sh 00:00:50.618 + set -ex 00:00:50.618 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:50.618 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.618 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.618 ++ SPDK_TEST_NVMF=1 00:00:50.618 ++ SPDK_TEST_NVME_CLI=1 00:00:50.618 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.618 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.618 ++ SPDK_TEST_VFIOUSER=1 00:00:50.618 ++ SPDK_RUN_UBSAN=1 00:00:50.618 ++ NET_TYPE=phy 00:00:50.618 ++ RUN_NIGHTLY=0 00:00:50.618 + case $SPDK_TEST_NVMF_NICS in 00:00:50.618 + 
DRIVERS=ice 00:00:50.618 + [[ tcp == \r\d\m\a ]] 00:00:50.618 + [[ -n ice ]] 00:00:50.619 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:50.619 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:50.619 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:50.619 rmmod: ERROR: Module irdma is not currently loaded 00:00:50.619 rmmod: ERROR: Module i40iw is not currently loaded 00:00:50.619 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:50.619 + true 00:00:50.619 + for D in $DRIVERS 00:00:50.619 + sudo modprobe ice 00:00:50.619 + exit 0 00:00:50.629 [Pipeline] } 00:00:50.649 [Pipeline] // withEnv 00:00:50.655 [Pipeline] } 00:00:50.672 [Pipeline] // stage 00:00:50.683 [Pipeline] catchError 00:00:50.684 [Pipeline] { 00:00:50.700 [Pipeline] timeout 00:00:50.700 Timeout set to expire in 50 min 00:00:50.702 [Pipeline] { 00:00:50.718 [Pipeline] stage 00:00:50.720 [Pipeline] { (Tests) 00:00:50.733 [Pipeline] sh 00:00:51.018 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.018 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.018 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.018 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:51.018 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.018 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.018 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:51.018 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.018 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.018 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.018 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:51.018 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.018 + source /etc/os-release 00:00:51.018 ++ NAME='Fedora Linux' 00:00:51.018 ++ VERSION='38 (Cloud Edition)' 00:00:51.018 ++ ID=fedora 00:00:51.018 ++ VERSION_ID=38 00:00:51.018 ++ VERSION_CODENAME= 00:00:51.018 ++ PLATFORM_ID=platform:f38 00:00:51.019 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:51.019 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:51.019 ++ LOGO=fedora-logo-icon 00:00:51.019 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:51.019 ++ HOME_URL=https://fedoraproject.org/ 00:00:51.019 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:51.019 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:51.019 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:51.019 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:51.019 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:51.019 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:51.019 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:51.019 ++ SUPPORT_END=2024-05-14 00:00:51.019 ++ VARIANT='Cloud Edition' 00:00:51.019 ++ VARIANT_ID=cloud 00:00:51.019 + uname -a 00:00:51.019 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:51.019 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:51.955 Hugepages 00:00:51.955 node hugesize free / total 00:00:51.955 node0 1048576kB 0 / 0 00:00:51.955 node0 2048kB 0 / 0 00:00:51.955 node1 1048576kB 0 / 0 00:00:51.955 node1 2048kB 0 / 0 00:00:51.955 00:00:51.955 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:51.955 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:51.955 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:51.955 I/OAT 0000:00:04.2 8086 
0e22 0 ioatdma - - 00:00:52.213 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:52.213 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:52.213 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:52.213 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:52.213 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:52.213 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:52.213 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:52.213 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:52.213 + rm -f /tmp/spdk-ld-path 00:00:52.213 + source autorun-spdk.conf 00:00:52.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.214 ++ SPDK_TEST_NVMF=1 00:00:52.214 ++ SPDK_TEST_NVME_CLI=1 00:00:52.214 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.214 ++ SPDK_TEST_NVMF_NICS=e810 00:00:52.214 ++ SPDK_TEST_VFIOUSER=1 00:00:52.214 ++ SPDK_RUN_UBSAN=1 00:00:52.214 ++ NET_TYPE=phy 00:00:52.214 ++ RUN_NIGHTLY=0 00:00:52.214 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:52.214 + [[ -n '' ]] 00:00:52.214 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.214 + for M in /var/spdk/build-*-manifest.txt 00:00:52.214 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:52.214 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.214 + for M in /var/spdk/build-*-manifest.txt 00:00:52.214 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:52.214 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:52.214 ++ uname 00:00:52.214 + [[ Linux == \L\i\n\u\x ]] 00:00:52.214 + sudo dmesg -T 00:00:52.214 + sudo dmesg --clear 00:00:52.214 + dmesg_pid=573761 00:00:52.214 + [[ Fedora Linux == FreeBSD ]] 00:00:52.214 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.214 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:52.214 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:52.214 + sudo dmesg -Tw 00:00:52.214 + [[ -x /usr/src/fio-static/fio ]] 00:00:52.214 + export FIO_BIN=/usr/src/fio-static/fio 00:00:52.214 + FIO_BIN=/usr/src/fio-static/fio 00:00:52.214 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:52.214 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:52.214 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:52.214 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.214 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:52.214 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:52.214 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.214 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:52.214 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.214 Test configuration: 00:00:52.214 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.214 SPDK_TEST_NVMF=1 00:00:52.214 SPDK_TEST_NVME_CLI=1 00:00:52.214 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.214 SPDK_TEST_NVMF_NICS=e810 00:00:52.214 SPDK_TEST_VFIOUSER=1 00:00:52.214 SPDK_RUN_UBSAN=1 00:00:52.214 NET_TYPE=phy 00:00:52.214 RUN_NIGHTLY=0 15:53:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:52.214 15:53:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:52.214 15:53:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:52.214 15:53:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:52.214 15:53:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.214 15:53:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.214 15:53:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.214 15:53:38 -- paths/export.sh@5 -- $ export PATH 00:00:52.214 15:53:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:52.214 15:53:38 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:52.214 15:53:38 -- common/autobuild_common.sh@473 -- $ date +%s 00:00:52.214 15:53:38 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721051618.XXXXXX 00:00:52.214 15:53:38 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051618.VcoTaz 00:00:52.214 15:53:38 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:00:52.214 15:53:38 -- 
common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:00:52.214 15:53:38 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:52.214 15:53:38 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:52.214 15:53:38 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:52.214 15:53:38 -- common/autobuild_common.sh@489 -- $ get_config_params 00:00:52.214 15:53:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:52.214 15:53:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.214 15:53:38 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:52.214 15:53:38 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:00:52.214 15:53:38 -- pm/common@17 -- $ local monitor 00:00:52.214 15:53:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.214 15:53:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.214 15:53:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.214 15:53:38 -- pm/common@21 -- $ date +%s 00:00:52.214 15:53:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:52.214 15:53:38 -- pm/common@21 -- $ date +%s 00:00:52.214 15:53:38 -- pm/common@25 -- $ sleep 1 00:00:52.214 15:53:38 -- pm/common@21 -- $ date +%s 00:00:52.214 15:53:38 -- pm/common@21 -- $ date +%s 00:00:52.214 15:53:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051618 00:00:52.214 15:53:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051618 00:00:52.214 15:53:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051618 00:00:52.476 15:53:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051618 00:00:52.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051618_collect-vmstat.pm.log 00:00:52.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051618_collect-cpu-load.pm.log 00:00:52.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051618_collect-cpu-temp.pm.log 00:00:52.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051618_collect-bmc-pm.bmc.pm.log 00:00:53.416 15:53:39 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:00:53.416 15:53:39 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:53.416 15:53:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:53.416 15:53:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.416 15:53:39 -- spdk/autobuild.sh@16 -- $ date -u 00:00:53.416 Mon Jul 15 01:53:39 PM UTC 2024 00:00:53.416 15:53:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:53.416 v24.09-pre-204-g255871c19 00:00:53.416 15:53:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:53.416 15:53:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:53.416 15:53:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:53.416 15:53:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:53.416 15:53:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:53.416 15:53:39 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.416 ************************************ 00:00:53.416 START TEST ubsan 00:00:53.416 ************************************ 00:00:53.416 15:53:39 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:53.416 using ubsan 00:00:53.416 00:00:53.416 real 0m0.000s 00:00:53.416 user 0m0.000s 00:00:53.416 sys 0m0.000s 00:00:53.416 15:53:39 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:53.416 15:53:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:53.416 ************************************ 00:00:53.416 END TEST ubsan 00:00:53.416 ************************************ 00:00:53.416 15:53:39 -- common/autotest_common.sh@1142 -- $ return 0 00:00:53.416 15:53:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:53.416 15:53:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:53.416 15:53:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:53.416 15:53:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:53.417 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:53.417 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:53.676 Using 'verbs' RDMA provider 00:01:04.596 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:14.581 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:14.581 Creating mk/config.mk...done. 00:01:14.581 Creating mk/cc.flags.mk...done. 00:01:14.581 Type 'make' to build. 00:01:14.581 15:54:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:14.581 15:54:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.581 15:54:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.581 15:54:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.581 ************************************ 00:01:14.581 START TEST make 00:01:14.581 ************************************ 00:01:14.581 15:54:00 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:14.581 make[1]: Nothing to be done for 'all'. 
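The configure line and the "make -j48" test recorded above can be reproduced by hand from a checked-out SPDK tree; the flag set below is copied verbatim from this log (other autotest jobs select different options):

    # Manual reproduction of this job's SPDK build step (flags copied from the log above).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48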
00:01:15.965 The Meson build system 00:01:15.965 Version: 1.3.1 00:01:15.965 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:15.965 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:15.965 Build type: native build 00:01:15.965 Project name: libvfio-user 00:01:15.965 Project version: 0.0.1 00:01:15.965 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:15.965 C linker for the host machine: cc ld.bfd 2.39-16 00:01:15.965 Host machine cpu family: x86_64 00:01:15.965 Host machine cpu: x86_64 00:01:15.965 Run-time dependency threads found: YES 00:01:15.965 Library dl found: YES 00:01:15.965 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:15.965 Run-time dependency json-c found: YES 0.17 00:01:15.965 Run-time dependency cmocka found: YES 1.1.7 00:01:15.965 Program pytest-3 found: NO 00:01:15.965 Program flake8 found: NO 00:01:15.965 Program misspell-fixer found: NO 00:01:15.965 Program restructuredtext-lint found: NO 00:01:15.965 Program valgrind found: YES (/usr/bin/valgrind) 00:01:15.965 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:15.965 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:15.965 Compiler for C supports arguments -Wwrite-strings: YES 00:01:15.965 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:15.965 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:15.965 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:15.965 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:15.965 Build targets in project: 8 00:01:15.965 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:15.965 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:15.965 00:01:15.965 libvfio-user 0.0.1 00:01:15.965 00:01:15.965 User defined options 00:01:15.965 buildtype : debug 00:01:15.965 default_library: shared 00:01:15.965 libdir : /usr/local/lib 00:01:15.965 00:01:15.965 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:16.924 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:16.924 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:16.924 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:16.924 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:16.924 [4/37] Compiling C object samples/null.p/null.c.o 00:01:16.924 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:16.924 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:16.924 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:16.924 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:16.924 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:16.924 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:16.925 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:16.925 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:16.925 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:16.925 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:16.925 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:17.185 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:17.185 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:17.185 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:17.185 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:17.185 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:17.185 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:17.185 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:17.185 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:17.185 [24/37] Compiling C object samples/server.p/server.c.o 00:01:17.185 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:17.185 [26/37] Compiling C object samples/client.p/client.c.o 00:01:17.185 [27/37] Linking target samples/client 00:01:17.185 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:17.447 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:17.447 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:17.447 [31/37] Linking target test/unit_tests 00:01:17.447 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:17.711 [33/37] Linking target samples/lspci 00:01:17.711 [34/37] Linking target samples/server 00:01:17.711 [35/37] Linking target samples/gpio-pci-idio-16 00:01:17.711 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:17.711 [37/37] Linking target samples/null 00:01:17.711 INFO: autodetecting backend as ninja 00:01:17.711 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
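The [1/37]..[37/37] listing above is Ninja compiling SPDK's bundled libvfio-user, which the SPDK build drives through Meson because this job configured with --with-vfio-user. A rough sketch of running the same sub-build by hand, with the build directory, buildtype, default_library and libdir taken from the Meson summary above (the wrapper SPDK itself uses may pass further options):

    # Approximate manual equivalent of the libvfio-user sub-build shown above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    meson setup build/libvfio-user/build-debug libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C build/libvfio-user/build-debug
    DESTDIR=$PWD/build/libvfio-user meson install --quiet -C build/libvfio-user/build-debug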
00:01:17.711 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:18.284 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:18.284 ninja: no work to do. 00:01:23.565 The Meson build system 00:01:23.565 Version: 1.3.1 00:01:23.565 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:23.565 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:23.565 Build type: native build 00:01:23.565 Program cat found: YES (/usr/bin/cat) 00:01:23.565 Project name: DPDK 00:01:23.565 Project version: 24.03.0 00:01:23.565 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:23.565 C linker for the host machine: cc ld.bfd 2.39-16 00:01:23.565 Host machine cpu family: x86_64 00:01:23.565 Host machine cpu: x86_64 00:01:23.565 Message: ## Building in Developer Mode ## 00:01:23.565 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:23.565 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:23.565 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:23.565 Program python3 found: YES (/usr/bin/python3) 00:01:23.565 Program cat found: YES (/usr/bin/cat) 00:01:23.565 Compiler for C supports arguments -march=native: YES 00:01:23.565 Checking for size of "void *" : 8 00:01:23.565 Checking for size of "void *" : 8 (cached) 00:01:23.565 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:23.565 Library m found: YES 00:01:23.565 Library numa found: YES 00:01:23.565 Has header "numaif.h" : YES 00:01:23.565 Library fdt found: NO 00:01:23.565 Library execinfo found: NO 00:01:23.565 Has header "execinfo.h" : YES 00:01:23.565 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:23.565 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:23.565 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:23.565 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:23.565 Run-time dependency openssl found: YES 3.0.9 00:01:23.565 Run-time dependency libpcap found: YES 1.10.4 00:01:23.565 Has header "pcap.h" with dependency libpcap: YES 00:01:23.565 Compiler for C supports arguments -Wcast-qual: YES 00:01:23.565 Compiler for C supports arguments -Wdeprecated: YES 00:01:23.565 Compiler for C supports arguments -Wformat: YES 00:01:23.565 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:23.565 Compiler for C supports arguments -Wformat-security: NO 00:01:23.565 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.565 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:23.565 Compiler for C supports arguments -Wnested-externs: YES 00:01:23.565 Compiler for C supports arguments -Wold-style-definition: YES 00:01:23.565 Compiler for C supports arguments -Wpointer-arith: YES 00:01:23.565 Compiler for C supports arguments -Wsign-compare: YES 00:01:23.565 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:23.565 Compiler for C supports arguments -Wundef: YES 00:01:23.565 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.565 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:23.565 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:23.565 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.565 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:23.565 Program objdump found: YES (/usr/bin/objdump) 00:01:23.565 Compiler for C supports arguments -mavx512f: YES 00:01:23.565 Checking if "AVX512 checking" compiles: YES 00:01:23.565 Fetching value of define "__SSE4_2__" : 1 00:01:23.565 Fetching value of define "__AES__" : 1 00:01:23.565 Fetching value of define "__AVX__" : 1 00:01:23.565 Fetching value of define "__AVX2__" : (undefined) 00:01:23.565 Fetching value of define "__AVX512BW__" : (undefined) 00:01:23.565 Fetching value of define "__AVX512CD__" : (undefined) 00:01:23.565 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:23.565 Fetching value of define "__AVX512F__" : (undefined) 00:01:23.565 Fetching value of define "__AVX512VL__" : (undefined) 00:01:23.565 Fetching value of define "__PCLMUL__" : 1 00:01:23.565 Fetching value of define "__RDRND__" : 1 00:01:23.565 Fetching value of define "__RDSEED__" : (undefined) 00:01:23.565 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:23.565 Fetching value of define "__znver1__" : (undefined) 00:01:23.565 Fetching value of define "__znver2__" : (undefined) 00:01:23.565 Fetching value of define "__znver3__" : (undefined) 00:01:23.565 Fetching value of define "__znver4__" : (undefined) 00:01:23.565 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:23.565 Message: lib/log: Defining dependency "log" 00:01:23.565 Message: lib/kvargs: Defining dependency "kvargs" 00:01:23.565 Message: lib/telemetry: Defining dependency "telemetry" 00:01:23.565 Checking for function "getentropy" : NO 00:01:23.565 Message: lib/eal: Defining dependency "eal" 00:01:23.565 Message: lib/ring: Defining dependency "ring" 00:01:23.565 Message: lib/rcu: Defining dependency "rcu" 00:01:23.565 Message: lib/mempool: Defining dependency "mempool" 00:01:23.565 Message: lib/mbuf: Defining dependency "mbuf" 00:01:23.565 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:23.565 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.565 Compiler for C supports arguments -mpclmul: YES 00:01:23.565 Compiler for C supports arguments -maes: YES 00:01:23.565 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:23.565 Compiler for C supports arguments -mavx512bw: YES 00:01:23.565 Compiler for C supports arguments -mavx512dq: YES 00:01:23.565 Compiler for C supports arguments -mavx512vl: YES 00:01:23.565 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:23.565 Compiler for C supports arguments -mavx2: YES 00:01:23.565 Compiler for C supports arguments -mavx: YES 00:01:23.565 Message: lib/net: Defining dependency "net" 00:01:23.565 Message: lib/meter: Defining dependency "meter" 00:01:23.565 Message: lib/ethdev: Defining dependency "ethdev" 00:01:23.565 Message: lib/pci: Defining dependency "pci" 00:01:23.565 Message: lib/cmdline: Defining dependency "cmdline" 00:01:23.565 Message: lib/hash: Defining dependency "hash" 00:01:23.565 Message: lib/timer: Defining dependency "timer" 00:01:23.565 Message: lib/compressdev: Defining dependency "compressdev" 00:01:23.565 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:23.565 Message: lib/dmadev: Defining dependency "dmadev" 00:01:23.565 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:23.565 Message: lib/power: Defining dependency "power" 00:01:23.565 Message: lib/reorder: Defining dependency "reorder" 00:01:23.565 
Message: lib/security: Defining dependency "security" 00:01:23.565 Has header "linux/userfaultfd.h" : YES 00:01:23.565 Has header "linux/vduse.h" : YES 00:01:23.565 Message: lib/vhost: Defining dependency "vhost" 00:01:23.565 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:23.565 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:23.565 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:23.565 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:23.565 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:23.565 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:23.565 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:23.565 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:23.565 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:23.565 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:23.565 Program doxygen found: YES (/usr/bin/doxygen) 00:01:23.565 Configuring doxy-api-html.conf using configuration 00:01:23.565 Configuring doxy-api-man.conf using configuration 00:01:23.565 Program mandb found: YES (/usr/bin/mandb) 00:01:23.565 Program sphinx-build found: NO 00:01:23.565 Configuring rte_build_config.h using configuration 00:01:23.565 Message: 00:01:23.565 ================= 00:01:23.565 Applications Enabled 00:01:23.565 ================= 00:01:23.565 00:01:23.565 apps: 00:01:23.565 00:01:23.565 00:01:23.565 Message: 00:01:23.565 ================= 00:01:23.565 Libraries Enabled 00:01:23.565 ================= 00:01:23.565 00:01:23.565 libs: 00:01:23.565 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:23.566 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:23.566 cryptodev, dmadev, power, reorder, security, vhost, 00:01:23.566 00:01:23.566 Message: 00:01:23.566 =============== 00:01:23.566 Drivers Enabled 00:01:23.566 =============== 00:01:23.566 00:01:23.566 common: 00:01:23.566 00:01:23.566 bus: 00:01:23.566 pci, vdev, 00:01:23.566 mempool: 00:01:23.566 ring, 00:01:23.566 dma: 00:01:23.566 00:01:23.566 net: 00:01:23.566 00:01:23.566 crypto: 00:01:23.566 00:01:23.566 compress: 00:01:23.566 00:01:23.566 vdpa: 00:01:23.566 00:01:23.566 00:01:23.566 Message: 00:01:23.566 ================= 00:01:23.566 Content Skipped 00:01:23.566 ================= 00:01:23.566 00:01:23.566 apps: 00:01:23.566 dumpcap: explicitly disabled via build config 00:01:23.566 graph: explicitly disabled via build config 00:01:23.566 pdump: explicitly disabled via build config 00:01:23.566 proc-info: explicitly disabled via build config 00:01:23.566 test-acl: explicitly disabled via build config 00:01:23.566 test-bbdev: explicitly disabled via build config 00:01:23.566 test-cmdline: explicitly disabled via build config 00:01:23.566 test-compress-perf: explicitly disabled via build config 00:01:23.566 test-crypto-perf: explicitly disabled via build config 00:01:23.566 test-dma-perf: explicitly disabled via build config 00:01:23.566 test-eventdev: explicitly disabled via build config 00:01:23.566 test-fib: explicitly disabled via build config 00:01:23.566 test-flow-perf: explicitly disabled via build config 00:01:23.566 test-gpudev: explicitly disabled via build config 00:01:23.566 test-mldev: explicitly disabled via build config 00:01:23.566 test-pipeline: explicitly disabled via build config 00:01:23.566 test-pmd: explicitly disabled via build config 
00:01:23.566 test-regex: explicitly disabled via build config 00:01:23.566 test-sad: explicitly disabled via build config 00:01:23.566 test-security-perf: explicitly disabled via build config 00:01:23.566 00:01:23.566 libs: 00:01:23.566 argparse: explicitly disabled via build config 00:01:23.566 metrics: explicitly disabled via build config 00:01:23.566 acl: explicitly disabled via build config 00:01:23.566 bbdev: explicitly disabled via build config 00:01:23.566 bitratestats: explicitly disabled via build config 00:01:23.566 bpf: explicitly disabled via build config 00:01:23.566 cfgfile: explicitly disabled via build config 00:01:23.566 distributor: explicitly disabled via build config 00:01:23.566 efd: explicitly disabled via build config 00:01:23.566 eventdev: explicitly disabled via build config 00:01:23.566 dispatcher: explicitly disabled via build config 00:01:23.566 gpudev: explicitly disabled via build config 00:01:23.566 gro: explicitly disabled via build config 00:01:23.566 gso: explicitly disabled via build config 00:01:23.566 ip_frag: explicitly disabled via build config 00:01:23.566 jobstats: explicitly disabled via build config 00:01:23.566 latencystats: explicitly disabled via build config 00:01:23.566 lpm: explicitly disabled via build config 00:01:23.566 member: explicitly disabled via build config 00:01:23.566 pcapng: explicitly disabled via build config 00:01:23.566 rawdev: explicitly disabled via build config 00:01:23.566 regexdev: explicitly disabled via build config 00:01:23.566 mldev: explicitly disabled via build config 00:01:23.566 rib: explicitly disabled via build config 00:01:23.566 sched: explicitly disabled via build config 00:01:23.566 stack: explicitly disabled via build config 00:01:23.566 ipsec: explicitly disabled via build config 00:01:23.566 pdcp: explicitly disabled via build config 00:01:23.566 fib: explicitly disabled via build config 00:01:23.566 port: explicitly disabled via build config 00:01:23.566 pdump: explicitly disabled via build config 00:01:23.566 table: explicitly disabled via build config 00:01:23.566 pipeline: explicitly disabled via build config 00:01:23.566 graph: explicitly disabled via build config 00:01:23.566 node: explicitly disabled via build config 00:01:23.566 00:01:23.566 drivers: 00:01:23.566 common/cpt: not in enabled drivers build config 00:01:23.566 common/dpaax: not in enabled drivers build config 00:01:23.566 common/iavf: not in enabled drivers build config 00:01:23.566 common/idpf: not in enabled drivers build config 00:01:23.566 common/ionic: not in enabled drivers build config 00:01:23.566 common/mvep: not in enabled drivers build config 00:01:23.566 common/octeontx: not in enabled drivers build config 00:01:23.566 bus/auxiliary: not in enabled drivers build config 00:01:23.566 bus/cdx: not in enabled drivers build config 00:01:23.566 bus/dpaa: not in enabled drivers build config 00:01:23.566 bus/fslmc: not in enabled drivers build config 00:01:23.566 bus/ifpga: not in enabled drivers build config 00:01:23.566 bus/platform: not in enabled drivers build config 00:01:23.566 bus/uacce: not in enabled drivers build config 00:01:23.566 bus/vmbus: not in enabled drivers build config 00:01:23.566 common/cnxk: not in enabled drivers build config 00:01:23.566 common/mlx5: not in enabled drivers build config 00:01:23.566 common/nfp: not in enabled drivers build config 00:01:23.566 common/nitrox: not in enabled drivers build config 00:01:23.566 common/qat: not in enabled drivers build config 00:01:23.566 common/sfc_efx: not in 
enabled drivers build config 00:01:23.566 mempool/bucket: not in enabled drivers build config 00:01:23.566 mempool/cnxk: not in enabled drivers build config 00:01:23.566 mempool/dpaa: not in enabled drivers build config 00:01:23.566 mempool/dpaa2: not in enabled drivers build config 00:01:23.566 mempool/octeontx: not in enabled drivers build config 00:01:23.566 mempool/stack: not in enabled drivers build config 00:01:23.566 dma/cnxk: not in enabled drivers build config 00:01:23.566 dma/dpaa: not in enabled drivers build config 00:01:23.566 dma/dpaa2: not in enabled drivers build config 00:01:23.566 dma/hisilicon: not in enabled drivers build config 00:01:23.566 dma/idxd: not in enabled drivers build config 00:01:23.566 dma/ioat: not in enabled drivers build config 00:01:23.566 dma/skeleton: not in enabled drivers build config 00:01:23.566 net/af_packet: not in enabled drivers build config 00:01:23.566 net/af_xdp: not in enabled drivers build config 00:01:23.566 net/ark: not in enabled drivers build config 00:01:23.566 net/atlantic: not in enabled drivers build config 00:01:23.566 net/avp: not in enabled drivers build config 00:01:23.566 net/axgbe: not in enabled drivers build config 00:01:23.566 net/bnx2x: not in enabled drivers build config 00:01:23.566 net/bnxt: not in enabled drivers build config 00:01:23.566 net/bonding: not in enabled drivers build config 00:01:23.566 net/cnxk: not in enabled drivers build config 00:01:23.566 net/cpfl: not in enabled drivers build config 00:01:23.566 net/cxgbe: not in enabled drivers build config 00:01:23.566 net/dpaa: not in enabled drivers build config 00:01:23.566 net/dpaa2: not in enabled drivers build config 00:01:23.566 net/e1000: not in enabled drivers build config 00:01:23.566 net/ena: not in enabled drivers build config 00:01:23.566 net/enetc: not in enabled drivers build config 00:01:23.566 net/enetfec: not in enabled drivers build config 00:01:23.566 net/enic: not in enabled drivers build config 00:01:23.566 net/failsafe: not in enabled drivers build config 00:01:23.566 net/fm10k: not in enabled drivers build config 00:01:23.566 net/gve: not in enabled drivers build config 00:01:23.566 net/hinic: not in enabled drivers build config 00:01:23.566 net/hns3: not in enabled drivers build config 00:01:23.566 net/i40e: not in enabled drivers build config 00:01:23.566 net/iavf: not in enabled drivers build config 00:01:23.566 net/ice: not in enabled drivers build config 00:01:23.566 net/idpf: not in enabled drivers build config 00:01:23.566 net/igc: not in enabled drivers build config 00:01:23.566 net/ionic: not in enabled drivers build config 00:01:23.566 net/ipn3ke: not in enabled drivers build config 00:01:23.566 net/ixgbe: not in enabled drivers build config 00:01:23.566 net/mana: not in enabled drivers build config 00:01:23.566 net/memif: not in enabled drivers build config 00:01:23.566 net/mlx4: not in enabled drivers build config 00:01:23.566 net/mlx5: not in enabled drivers build config 00:01:23.566 net/mvneta: not in enabled drivers build config 00:01:23.566 net/mvpp2: not in enabled drivers build config 00:01:23.566 net/netvsc: not in enabled drivers build config 00:01:23.566 net/nfb: not in enabled drivers build config 00:01:23.566 net/nfp: not in enabled drivers build config 00:01:23.566 net/ngbe: not in enabled drivers build config 00:01:23.566 net/null: not in enabled drivers build config 00:01:23.566 net/octeontx: not in enabled drivers build config 00:01:23.566 net/octeon_ep: not in enabled drivers build config 00:01:23.566 
net/pcap: not in enabled drivers build config 00:01:23.566 net/pfe: not in enabled drivers build config 00:01:23.566 net/qede: not in enabled drivers build config 00:01:23.566 net/ring: not in enabled drivers build config 00:01:23.566 net/sfc: not in enabled drivers build config 00:01:23.566 net/softnic: not in enabled drivers build config 00:01:23.566 net/tap: not in enabled drivers build config 00:01:23.566 net/thunderx: not in enabled drivers build config 00:01:23.566 net/txgbe: not in enabled drivers build config 00:01:23.566 net/vdev_netvsc: not in enabled drivers build config 00:01:23.566 net/vhost: not in enabled drivers build config 00:01:23.566 net/virtio: not in enabled drivers build config 00:01:23.566 net/vmxnet3: not in enabled drivers build config 00:01:23.566 raw/*: missing internal dependency, "rawdev" 00:01:23.566 crypto/armv8: not in enabled drivers build config 00:01:23.566 crypto/bcmfs: not in enabled drivers build config 00:01:23.566 crypto/caam_jr: not in enabled drivers build config 00:01:23.566 crypto/ccp: not in enabled drivers build config 00:01:23.566 crypto/cnxk: not in enabled drivers build config 00:01:23.566 crypto/dpaa_sec: not in enabled drivers build config 00:01:23.566 crypto/dpaa2_sec: not in enabled drivers build config 00:01:23.566 crypto/ipsec_mb: not in enabled drivers build config 00:01:23.566 crypto/mlx5: not in enabled drivers build config 00:01:23.566 crypto/mvsam: not in enabled drivers build config 00:01:23.566 crypto/nitrox: not in enabled drivers build config 00:01:23.566 crypto/null: not in enabled drivers build config 00:01:23.566 crypto/octeontx: not in enabled drivers build config 00:01:23.566 crypto/openssl: not in enabled drivers build config 00:01:23.566 crypto/scheduler: not in enabled drivers build config 00:01:23.566 crypto/uadk: not in enabled drivers build config 00:01:23.566 crypto/virtio: not in enabled drivers build config 00:01:23.566 compress/isal: not in enabled drivers build config 00:01:23.566 compress/mlx5: not in enabled drivers build config 00:01:23.566 compress/nitrox: not in enabled drivers build config 00:01:23.566 compress/octeontx: not in enabled drivers build config 00:01:23.566 compress/zlib: not in enabled drivers build config 00:01:23.566 regex/*: missing internal dependency, "regexdev" 00:01:23.566 ml/*: missing internal dependency, "mldev" 00:01:23.566 vdpa/ifc: not in enabled drivers build config 00:01:23.566 vdpa/mlx5: not in enabled drivers build config 00:01:23.566 vdpa/nfp: not in enabled drivers build config 00:01:23.566 vdpa/sfc: not in enabled drivers build config 00:01:23.566 event/*: missing internal dependency, "eventdev" 00:01:23.566 baseband/*: missing internal dependency, "bbdev" 00:01:23.566 gpu/*: missing internal dependency, "gpudev" 00:01:23.566 00:01:23.566 00:01:23.566 Build targets in project: 85 00:01:23.566 00:01:23.566 DPDK 24.03.0 00:01:23.566 00:01:23.566 User defined options 00:01:23.566 buildtype : debug 00:01:23.566 default_library : shared 00:01:23.566 libdir : lib 00:01:23.567 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:23.567 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:23.567 c_link_args : 00:01:23.567 cpu_instruction_set: native 00:01:23.567 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:23.567 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:23.567 enable_docs : false 00:01:23.567 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:23.567 enable_kmods : false 00:01:23.567 max_lcores : 128 00:01:23.567 tests : false 00:01:23.567 00:01:23.567 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:23.567 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:23.828 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:23.828 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:23.828 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:23.828 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:23.828 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:23.828 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:23.828 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:23.828 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:23.828 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:23.828 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:23.828 [11/268] Linking static target lib/librte_kvargs.a 00:01:23.828 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:23.828 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:23.828 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:23.828 [15/268] Linking static target lib/librte_log.a 00:01:23.828 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.397 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.658 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.658 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.658 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.658 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:24.658 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:24.658 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:24.658 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:24.658 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:24.658 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:24.658 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.658 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:24.658 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:24.658 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.658 
[31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:24.658 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:24.658 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:24.658 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:24.658 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:24.658 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.658 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.658 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:24.658 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:24.658 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:24.658 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:24.658 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.658 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:24.658 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.658 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.658 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:24.658 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:24.658 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.658 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:24.658 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:24.658 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:24.658 [52/268] Linking static target lib/librte_telemetry.a 00:01:24.918 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:24.918 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.918 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:24.918 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.918 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:24.918 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:24.918 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:24.918 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:24.918 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.918 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.918 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:24.918 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:24.918 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.179 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.179 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.179 [68/268] Linking target lib/librte_log.so.24.1 00:01:25.179 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:25.179 [70/268] Linking static target lib/librte_pci.a 00:01:25.439 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:25.439 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.439 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:25.439 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.439 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:25.439 [76/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:25.439 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:25.439 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:25.439 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.703 [80/268] Linking target lib/librte_kvargs.so.24.1 00:01:25.703 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:25.703 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:25.703 [83/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.703 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:25.703 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.703 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:25.703 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.703 [88/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.703 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.703 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:25.703 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.703 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.703 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.703 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.703 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:25.703 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:25.703 [97/268] Linking static target lib/librte_ring.a 00:01:25.703 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.703 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:25.703 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.703 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:25.703 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.703 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.703 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.965 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.965 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.965 [107/268] Linking static target lib/librte_meter.a 00:01:25.965 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:25.965 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.965 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.965 [111/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:25.965 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.965 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.965 [114/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.965 [115/268] Linking static target lib/librte_rcu.a 00:01:25.965 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:25.965 [117/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.965 [118/268] Linking static target lib/librte_eal.a 00:01:25.965 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.965 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.965 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.965 [122/268] Linking target lib/librte_telemetry.so.24.1 00:01:25.965 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.965 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.965 [125/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.965 [126/268] Linking static target lib/librte_mempool.a 00:01:25.965 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.965 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:26.223 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:26.223 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:26.223 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:26.223 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:26.223 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:26.223 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:26.223 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.223 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.223 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:26.483 [138/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:26.483 [139/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.483 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:26.483 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:26.483 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:26.483 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:26.483 [144/268] Linking static target lib/librte_cmdline.a 00:01:26.483 [145/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:26.483 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.483 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:26.483 [148/268] Linking static target lib/librte_net.a 00:01:26.743 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:26.743 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:26.743 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:26.743 [152/268] Linking static target lib/librte_timer.a 
00:01:26.743 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:26.743 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.743 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:26.743 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.004 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.004 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.004 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.004 [160/268] Linking static target lib/librte_dmadev.a 00:01:27.004 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.004 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.004 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.004 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.004 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.004 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.004 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.004 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.292 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.292 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.292 [171/268] Linking static target lib/librte_compressdev.a 00:01:27.292 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.292 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.292 [174/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.292 [175/268] Linking static target lib/librte_power.a 00:01:27.292 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.292 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.292 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.292 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.292 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.292 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.292 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.292 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.292 [184/268] Linking static target lib/librte_hash.a 00:01:27.292 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.292 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.292 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.292 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.557 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.557 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.557 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.557 [192/268] Generating lib/dmadev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:27.557 [193/268] Linking static target lib/librte_reorder.a 00:01:27.557 [194/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.557 [195/268] Linking static target lib/librte_mbuf.a 00:01:27.557 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.557 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.557 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.557 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.557 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.557 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:27.557 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.557 [203/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.557 [204/268] Linking static target lib/librte_security.a 00:01:27.557 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.557 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.557 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.557 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.557 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.557 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:27.815 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.815 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.815 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:27.815 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.815 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.815 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.815 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:27.815 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.815 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.815 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.815 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.073 [222/268] Linking static target lib/librte_ethdev.a 00:01:28.073 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.073 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:28.073 [225/268] Linking static target lib/librte_cryptodev.a 00:01:28.073 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.006 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.380 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.278 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:32.278 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.278 [231/268] Linking target lib/librte_eal.so.24.1 00:01:32.278 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:32.278 [233/268] Linking target lib/librte_ring.so.24.1 00:01:32.278 [234/268] Linking target lib/librte_timer.so.24.1 00:01:32.279 [235/268] Linking target lib/librte_meter.so.24.1 00:01:32.279 [236/268] Linking target lib/librte_pci.so.24.1 00:01:32.279 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:32.279 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:32.537 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:32.537 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:32.537 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:32.537 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:32.537 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:32.537 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:32.537 [245/268] Linking target lib/librte_rcu.so.24.1 00:01:32.537 [246/268] Linking target lib/librte_mempool.so.24.1 00:01:32.795 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:32.795 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:32.795 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:32.795 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:32.795 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:32.795 [252/268] Linking target lib/librte_compressdev.so.24.1 00:01:32.795 [253/268] Linking target lib/librte_reorder.so.24.1 00:01:32.795 [254/268] Linking target lib/librte_net.so.24.1 00:01:32.795 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:33.053 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:33.053 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:33.053 [258/268] Linking target lib/librte_security.so.24.1 00:01:33.053 [259/268] Linking target lib/librte_hash.so.24.1 00:01:33.053 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:33.053 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:33.053 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:33.311 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:33.311 [264/268] Linking target lib/librte_power.so.24.1 00:01:35.861 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:35.861 [266/268] Linking static target lib/librte_vhost.a 00:01:36.797 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.057 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:37.057 INFO: autodetecting backend as ninja 00:01:37.057 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:37.990 CC lib/ut_mock/mock.o 00:01:37.990 CC lib/ut/ut.o 00:01:37.990 CC lib/log/log.o 00:01:37.990 CC lib/log/log_flags.o 00:01:37.990 CC lib/log/log_deprecated.o 00:01:37.990 LIB libspdk_log.a 00:01:37.990 LIB 
libspdk_ut.a 00:01:37.990 LIB libspdk_ut_mock.a 00:01:37.990 SO libspdk_ut.so.2.0 00:01:37.990 SO libspdk_ut_mock.so.6.0 00:01:37.990 SO libspdk_log.so.7.0 00:01:37.990 SYMLINK libspdk_ut_mock.so 00:01:37.990 SYMLINK libspdk_ut.so 00:01:37.990 SYMLINK libspdk_log.so 00:01:38.248 CC lib/dma/dma.o 00:01:38.248 CXX lib/trace_parser/trace.o 00:01:38.248 CC lib/ioat/ioat.o 00:01:38.248 CC lib/util/base64.o 00:01:38.248 CC lib/util/bit_array.o 00:01:38.248 CC lib/util/cpuset.o 00:01:38.248 CC lib/util/crc16.o 00:01:38.248 CC lib/util/crc32.o 00:01:38.248 CC lib/util/crc32c.o 00:01:38.248 CC lib/util/crc32_ieee.o 00:01:38.248 CC lib/util/crc64.o 00:01:38.248 CC lib/util/dif.o 00:01:38.248 CC lib/util/fd.o 00:01:38.248 CC lib/util/file.o 00:01:38.248 CC lib/util/hexlify.o 00:01:38.248 CC lib/util/iov.o 00:01:38.248 CC lib/util/math.o 00:01:38.248 CC lib/util/pipe.o 00:01:38.248 CC lib/util/strerror_tls.o 00:01:38.248 CC lib/util/string.o 00:01:38.248 CC lib/util/uuid.o 00:01:38.248 CC lib/util/fd_group.o 00:01:38.248 CC lib/util/xor.o 00:01:38.248 CC lib/util/zipf.o 00:01:38.505 CC lib/vfio_user/host/vfio_user_pci.o 00:01:38.505 CC lib/vfio_user/host/vfio_user.o 00:01:38.505 LIB libspdk_dma.a 00:01:38.505 SO libspdk_dma.so.4.0 00:01:38.505 SYMLINK libspdk_dma.so 00:01:38.505 LIB libspdk_ioat.a 00:01:38.505 SO libspdk_ioat.so.7.0 00:01:38.762 SYMLINK libspdk_ioat.so 00:01:38.762 LIB libspdk_vfio_user.a 00:01:38.762 SO libspdk_vfio_user.so.5.0 00:01:38.762 SYMLINK libspdk_vfio_user.so 00:01:38.762 LIB libspdk_util.a 00:01:38.762 SO libspdk_util.so.9.1 00:01:39.019 SYMLINK libspdk_util.so 00:01:39.276 CC lib/rdma_utils/rdma_utils.o 00:01:39.277 CC lib/conf/conf.o 00:01:39.277 CC lib/env_dpdk/env.o 00:01:39.277 CC lib/idxd/idxd.o 00:01:39.277 CC lib/rdma_provider/common.o 00:01:39.277 CC lib/vmd/vmd.o 00:01:39.277 CC lib/json/json_parse.o 00:01:39.277 CC lib/idxd/idxd_user.o 00:01:39.277 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:39.277 CC lib/env_dpdk/memory.o 00:01:39.277 CC lib/vmd/led.o 00:01:39.277 CC lib/idxd/idxd_kernel.o 00:01:39.277 CC lib/json/json_util.o 00:01:39.277 CC lib/env_dpdk/pci.o 00:01:39.277 CC lib/json/json_write.o 00:01:39.277 CC lib/env_dpdk/init.o 00:01:39.277 CC lib/env_dpdk/threads.o 00:01:39.277 CC lib/env_dpdk/pci_ioat.o 00:01:39.277 CC lib/env_dpdk/pci_virtio.o 00:01:39.277 CC lib/env_dpdk/pci_vmd.o 00:01:39.277 CC lib/env_dpdk/pci_idxd.o 00:01:39.277 CC lib/env_dpdk/pci_event.o 00:01:39.277 CC lib/env_dpdk/sigbus_handler.o 00:01:39.277 CC lib/env_dpdk/pci_dpdk.o 00:01:39.277 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:39.277 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:39.277 LIB libspdk_trace_parser.a 00:01:39.277 SO libspdk_trace_parser.so.5.0 00:01:39.277 SYMLINK libspdk_trace_parser.so 00:01:39.534 LIB libspdk_rdma_provider.a 00:01:39.534 SO libspdk_rdma_provider.so.6.0 00:01:39.534 LIB libspdk_rdma_utils.a 00:01:39.534 SYMLINK libspdk_rdma_provider.so 00:01:39.534 LIB libspdk_json.a 00:01:39.534 SO libspdk_rdma_utils.so.1.0 00:01:39.534 LIB libspdk_conf.a 00:01:39.534 SO libspdk_json.so.6.0 00:01:39.534 SO libspdk_conf.so.6.0 00:01:39.534 SYMLINK libspdk_rdma_utils.so 00:01:39.534 SYMLINK libspdk_json.so 00:01:39.534 SYMLINK libspdk_conf.so 00:01:39.791 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.792 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.792 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.792 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:39.792 LIB libspdk_idxd.a 00:01:39.792 SO libspdk_idxd.so.12.0 00:01:39.792 SYMLINK libspdk_idxd.so 00:01:39.792 LIB libspdk_vmd.a 00:01:40.049 
SO libspdk_vmd.so.6.0 00:01:40.049 SYMLINK libspdk_vmd.so 00:01:40.049 LIB libspdk_jsonrpc.a 00:01:40.049 SO libspdk_jsonrpc.so.6.0 00:01:40.049 SYMLINK libspdk_jsonrpc.so 00:01:40.308 CC lib/rpc/rpc.o 00:01:40.566 LIB libspdk_rpc.a 00:01:40.566 SO libspdk_rpc.so.6.0 00:01:40.567 SYMLINK libspdk_rpc.so 00:01:40.824 CC lib/notify/notify.o 00:01:40.824 CC lib/trace/trace.o 00:01:40.824 CC lib/keyring/keyring.o 00:01:40.824 CC lib/notify/notify_rpc.o 00:01:40.824 CC lib/trace/trace_flags.o 00:01:40.824 CC lib/keyring/keyring_rpc.o 00:01:40.824 CC lib/trace/trace_rpc.o 00:01:40.824 LIB libspdk_notify.a 00:01:40.824 SO libspdk_notify.so.6.0 00:01:41.082 LIB libspdk_keyring.a 00:01:41.082 SYMLINK libspdk_notify.so 00:01:41.082 LIB libspdk_trace.a 00:01:41.082 SO libspdk_keyring.so.1.0 00:01:41.082 SO libspdk_trace.so.10.0 00:01:41.082 SYMLINK libspdk_keyring.so 00:01:41.082 SYMLINK libspdk_trace.so 00:01:41.082 LIB libspdk_env_dpdk.a 00:01:41.340 SO libspdk_env_dpdk.so.14.1 00:01:41.340 CC lib/thread/thread.o 00:01:41.340 CC lib/thread/iobuf.o 00:01:41.340 CC lib/sock/sock.o 00:01:41.340 CC lib/sock/sock_rpc.o 00:01:41.340 SYMLINK libspdk_env_dpdk.so 00:01:41.604 LIB libspdk_sock.a 00:01:41.604 SO libspdk_sock.so.10.0 00:01:41.604 SYMLINK libspdk_sock.so 00:01:41.862 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:41.862 CC lib/nvme/nvme_ctrlr.o 00:01:41.862 CC lib/nvme/nvme_fabric.o 00:01:41.862 CC lib/nvme/nvme_ns_cmd.o 00:01:41.862 CC lib/nvme/nvme_ns.o 00:01:41.862 CC lib/nvme/nvme_pcie_common.o 00:01:41.862 CC lib/nvme/nvme_pcie.o 00:01:41.862 CC lib/nvme/nvme_qpair.o 00:01:41.862 CC lib/nvme/nvme.o 00:01:41.862 CC lib/nvme/nvme_quirks.o 00:01:41.862 CC lib/nvme/nvme_transport.o 00:01:41.862 CC lib/nvme/nvme_discovery.o 00:01:41.862 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:41.862 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:41.862 CC lib/nvme/nvme_tcp.o 00:01:41.862 CC lib/nvme/nvme_opal.o 00:01:41.862 CC lib/nvme/nvme_io_msg.o 00:01:41.862 CC lib/nvme/nvme_poll_group.o 00:01:41.862 CC lib/nvme/nvme_zns.o 00:01:41.862 CC lib/nvme/nvme_stubs.o 00:01:41.862 CC lib/nvme/nvme_auth.o 00:01:41.862 CC lib/nvme/nvme_cuse.o 00:01:41.862 CC lib/nvme/nvme_vfio_user.o 00:01:41.862 CC lib/nvme/nvme_rdma.o 00:01:42.795 LIB libspdk_thread.a 00:01:42.795 SO libspdk_thread.so.10.1 00:01:42.795 SYMLINK libspdk_thread.so 00:01:43.053 CC lib/virtio/virtio.o 00:01:43.053 CC lib/accel/accel.o 00:01:43.053 CC lib/vfu_tgt/tgt_endpoint.o 00:01:43.053 CC lib/init/json_config.o 00:01:43.053 CC lib/virtio/virtio_vhost_user.o 00:01:43.053 CC lib/vfu_tgt/tgt_rpc.o 00:01:43.053 CC lib/accel/accel_rpc.o 00:01:43.053 CC lib/blob/blobstore.o 00:01:43.053 CC lib/init/subsystem.o 00:01:43.053 CC lib/virtio/virtio_vfio_user.o 00:01:43.053 CC lib/accel/accel_sw.o 00:01:43.053 CC lib/init/subsystem_rpc.o 00:01:43.053 CC lib/virtio/virtio_pci.o 00:01:43.053 CC lib/blob/request.o 00:01:43.053 CC lib/init/rpc.o 00:01:43.053 CC lib/blob/zeroes.o 00:01:43.053 CC lib/blob/blob_bs_dev.o 00:01:43.310 LIB libspdk_init.a 00:01:43.310 SO libspdk_init.so.5.0 00:01:43.310 LIB libspdk_vfu_tgt.a 00:01:43.310 LIB libspdk_virtio.a 00:01:43.567 SYMLINK libspdk_init.so 00:01:43.567 SO libspdk_vfu_tgt.so.3.0 00:01:43.568 SO libspdk_virtio.so.7.0 00:01:43.568 SYMLINK libspdk_vfu_tgt.so 00:01:43.568 SYMLINK libspdk_virtio.so 00:01:43.568 CC lib/event/app.o 00:01:43.568 CC lib/event/reactor.o 00:01:43.568 CC lib/event/log_rpc.o 00:01:43.568 CC lib/event/app_rpc.o 00:01:43.568 CC lib/event/scheduler_static.o 00:01:44.178 LIB libspdk_event.a 00:01:44.178 SO 
libspdk_event.so.14.0 00:01:44.178 LIB libspdk_accel.a 00:01:44.178 SYMLINK libspdk_event.so 00:01:44.178 SO libspdk_accel.so.15.1 00:01:44.178 SYMLINK libspdk_accel.so 00:01:44.178 LIB libspdk_nvme.a 00:01:44.435 CC lib/bdev/bdev.o 00:01:44.435 CC lib/bdev/bdev_rpc.o 00:01:44.435 CC lib/bdev/bdev_zone.o 00:01:44.435 CC lib/bdev/part.o 00:01:44.435 CC lib/bdev/scsi_nvme.o 00:01:44.435 SO libspdk_nvme.so.13.1 00:01:44.692 SYMLINK libspdk_nvme.so 00:01:46.068 LIB libspdk_blob.a 00:01:46.068 SO libspdk_blob.so.11.0 00:01:46.068 SYMLINK libspdk_blob.so 00:01:46.325 CC lib/blobfs/blobfs.o 00:01:46.325 CC lib/blobfs/tree.o 00:01:46.325 CC lib/lvol/lvol.o 00:01:46.891 LIB libspdk_bdev.a 00:01:46.891 SO libspdk_bdev.so.15.1 00:01:47.157 SYMLINK libspdk_bdev.so 00:01:47.157 LIB libspdk_blobfs.a 00:01:47.157 SO libspdk_blobfs.so.10.0 00:01:47.157 LIB libspdk_lvol.a 00:01:47.157 SYMLINK libspdk_blobfs.so 00:01:47.157 CC lib/ublk/ublk.o 00:01:47.157 CC lib/scsi/dev.o 00:01:47.157 CC lib/nbd/nbd.o 00:01:47.157 CC lib/nvmf/ctrlr.o 00:01:47.157 CC lib/ublk/ublk_rpc.o 00:01:47.157 CC lib/scsi/lun.o 00:01:47.157 CC lib/nbd/nbd_rpc.o 00:01:47.157 CC lib/nvmf/ctrlr_discovery.o 00:01:47.157 CC lib/ftl/ftl_core.o 00:01:47.157 CC lib/scsi/port.o 00:01:47.157 CC lib/nvmf/ctrlr_bdev.o 00:01:47.157 CC lib/ftl/ftl_init.o 00:01:47.157 CC lib/nvmf/subsystem.o 00:01:47.157 CC lib/scsi/scsi.o 00:01:47.157 CC lib/ftl/ftl_layout.o 00:01:47.157 CC lib/scsi/scsi_bdev.o 00:01:47.157 CC lib/nvmf/nvmf.o 00:01:47.157 CC lib/ftl/ftl_debug.o 00:01:47.157 CC lib/nvmf/nvmf_rpc.o 00:01:47.157 CC lib/nvmf/transport.o 00:01:47.157 CC lib/scsi/scsi_pr.o 00:01:47.157 CC lib/ftl/ftl_io.o 00:01:47.157 CC lib/scsi/scsi_rpc.o 00:01:47.157 CC lib/nvmf/tcp.o 00:01:47.157 CC lib/ftl/ftl_sb.o 00:01:47.157 CC lib/ftl/ftl_l2p.o 00:01:47.157 CC lib/scsi/task.o 00:01:47.157 CC lib/nvmf/mdns_server.o 00:01:47.157 CC lib/nvmf/stubs.o 00:01:47.157 CC lib/ftl/ftl_l2p_flat.o 00:01:47.157 CC lib/ftl/ftl_nv_cache.o 00:01:47.157 CC lib/nvmf/vfio_user.o 00:01:47.157 CC lib/ftl/ftl_band_ops.o 00:01:47.157 CC lib/ftl/ftl_band.o 00:01:47.157 CC lib/nvmf/rdma.o 00:01:47.157 CC lib/nvmf/auth.o 00:01:47.157 CC lib/ftl/ftl_writer.o 00:01:47.157 CC lib/ftl/ftl_rq.o 00:01:47.157 CC lib/ftl/ftl_reloc.o 00:01:47.157 CC lib/ftl/ftl_l2p_cache.o 00:01:47.157 CC lib/ftl/ftl_p2l.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:47.157 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:47.157 SO libspdk_lvol.so.10.0 00:01:47.418 SYMLINK libspdk_lvol.so 00:01:47.418 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:47.683 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:47.683 CC lib/ftl/utils/ftl_conf.o 00:01:47.683 CC lib/ftl/utils/ftl_md.o 00:01:47.683 CC lib/ftl/utils/ftl_mempool.o 00:01:47.683 CC lib/ftl/utils/ftl_bitmap.o 00:01:47.683 CC lib/ftl/utils/ftl_property.o 00:01:47.683 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:47.683 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:47.683 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:47.683 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:47.683 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:47.683 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:47.683 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:47.683 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:47.944 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:47.944 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:47.944 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:47.944 CC lib/ftl/base/ftl_base_dev.o 00:01:47.944 CC lib/ftl/base/ftl_base_bdev.o 00:01:47.944 CC lib/ftl/ftl_trace.o 00:01:47.944 LIB libspdk_nbd.a 00:01:48.202 SO libspdk_nbd.so.7.0 00:01:48.202 SYMLINK libspdk_nbd.so 00:01:48.202 LIB libspdk_scsi.a 00:01:48.202 SO libspdk_scsi.so.9.0 00:01:48.202 LIB libspdk_ublk.a 00:01:48.460 SO libspdk_ublk.so.3.0 00:01:48.460 SYMLINK libspdk_scsi.so 00:01:48.460 SYMLINK libspdk_ublk.so 00:01:48.460 CC lib/vhost/vhost.o 00:01:48.460 CC lib/iscsi/conn.o 00:01:48.460 CC lib/iscsi/init_grp.o 00:01:48.460 CC lib/vhost/vhost_rpc.o 00:01:48.460 CC lib/vhost/vhost_scsi.o 00:01:48.460 CC lib/iscsi/iscsi.o 00:01:48.460 CC lib/vhost/vhost_blk.o 00:01:48.460 CC lib/iscsi/md5.o 00:01:48.460 CC lib/vhost/rte_vhost_user.o 00:01:48.460 CC lib/iscsi/param.o 00:01:48.460 CC lib/iscsi/portal_grp.o 00:01:48.460 CC lib/iscsi/tgt_node.o 00:01:48.460 CC lib/iscsi/iscsi_subsystem.o 00:01:48.460 CC lib/iscsi/iscsi_rpc.o 00:01:48.460 CC lib/iscsi/task.o 00:01:48.718 LIB libspdk_ftl.a 00:01:48.976 SO libspdk_ftl.so.9.0 00:01:49.234 SYMLINK libspdk_ftl.so 00:01:49.801 LIB libspdk_vhost.a 00:01:49.801 LIB libspdk_nvmf.a 00:01:49.801 SO libspdk_vhost.so.8.0 00:01:49.801 SO libspdk_nvmf.so.18.1 00:01:49.801 SYMLINK libspdk_vhost.so 00:01:50.060 LIB libspdk_iscsi.a 00:01:50.060 SO libspdk_iscsi.so.8.0 00:01:50.060 SYMLINK libspdk_nvmf.so 00:01:50.060 SYMLINK libspdk_iscsi.so 00:01:50.318 CC module/vfu_device/vfu_virtio.o 00:01:50.318 CC module/vfu_device/vfu_virtio_blk.o 00:01:50.318 CC module/env_dpdk/env_dpdk_rpc.o 00:01:50.318 CC module/vfu_device/vfu_virtio_scsi.o 00:01:50.318 CC module/vfu_device/vfu_virtio_rpc.o 00:01:50.576 CC module/scheduler/gscheduler/gscheduler.o 00:01:50.576 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:50.576 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:50.576 CC module/accel/error/accel_error.o 00:01:50.576 CC module/accel/ioat/accel_ioat.o 00:01:50.576 CC module/accel/iaa/accel_iaa.o 00:01:50.576 CC module/keyring/linux/keyring.o 00:01:50.576 CC module/blob/bdev/blob_bdev.o 00:01:50.576 CC module/sock/posix/posix.o 00:01:50.576 CC module/accel/error/accel_error_rpc.o 00:01:50.576 CC module/keyring/linux/keyring_rpc.o 00:01:50.576 CC module/accel/ioat/accel_ioat_rpc.o 00:01:50.576 CC module/accel/iaa/accel_iaa_rpc.o 00:01:50.576 CC module/keyring/file/keyring.o 00:01:50.576 CC module/keyring/file/keyring_rpc.o 00:01:50.576 CC module/accel/dsa/accel_dsa_rpc.o 00:01:50.576 CC module/accel/dsa/accel_dsa.o 00:01:50.576 LIB libspdk_env_dpdk_rpc.a 00:01:50.576 SO libspdk_env_dpdk_rpc.so.6.0 00:01:50.576 SYMLINK libspdk_env_dpdk_rpc.so 00:01:50.576 LIB libspdk_keyring_linux.a 00:01:50.576 LIB libspdk_keyring_file.a 00:01:50.576 LIB libspdk_scheduler_gscheduler.a 00:01:50.576 LIB libspdk_scheduler_dpdk_governor.a 00:01:50.834 SO libspdk_keyring_linux.so.1.0 00:01:50.834 SO libspdk_keyring_file.so.1.0 00:01:50.834 SO libspdk_scheduler_gscheduler.so.4.0 00:01:50.834 LIB libspdk_accel_error.a 00:01:50.834 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:50.834 LIB libspdk_scheduler_dynamic.a 00:01:50.834 LIB libspdk_accel_ioat.a 00:01:50.834 LIB libspdk_accel_iaa.a 00:01:50.834 SO libspdk_accel_error.so.2.0 00:01:50.834 SO libspdk_scheduler_dynamic.so.4.0 00:01:50.834 SO libspdk_accel_ioat.so.6.0 00:01:50.834 SYMLINK 
libspdk_keyring_linux.so 00:01:50.834 SYMLINK libspdk_keyring_file.so 00:01:50.834 SO libspdk_accel_iaa.so.3.0 00:01:50.834 SYMLINK libspdk_scheduler_gscheduler.so 00:01:50.834 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:50.834 LIB libspdk_blob_bdev.a 00:01:50.834 SYMLINK libspdk_accel_error.so 00:01:50.834 SYMLINK libspdk_scheduler_dynamic.so 00:01:50.834 LIB libspdk_accel_dsa.a 00:01:50.834 SYMLINK libspdk_accel_ioat.so 00:01:50.834 SYMLINK libspdk_accel_iaa.so 00:01:50.834 SO libspdk_blob_bdev.so.11.0 00:01:50.834 SO libspdk_accel_dsa.so.5.0 00:01:50.834 SYMLINK libspdk_blob_bdev.so 00:01:50.834 SYMLINK libspdk_accel_dsa.so 00:01:51.095 LIB libspdk_vfu_device.a 00:01:51.095 SO libspdk_vfu_device.so.3.0 00:01:51.095 CC module/bdev/error/vbdev_error.o 00:01:51.095 CC module/bdev/error/vbdev_error_rpc.o 00:01:51.095 CC module/bdev/malloc/bdev_malloc.o 00:01:51.095 CC module/bdev/lvol/vbdev_lvol.o 00:01:51.095 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:51.095 CC module/bdev/delay/vbdev_delay.o 00:01:51.095 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:51.095 CC module/bdev/raid/bdev_raid.o 00:01:51.095 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:51.095 CC module/bdev/passthru/vbdev_passthru.o 00:01:51.095 CC module/bdev/nvme/bdev_nvme.o 00:01:51.095 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:51.095 CC module/bdev/raid/bdev_raid_rpc.o 00:01:51.095 CC module/bdev/ftl/bdev_ftl.o 00:01:51.095 CC module/bdev/raid/bdev_raid_sb.o 00:01:51.095 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:51.095 CC module/bdev/nvme/nvme_rpc.o 00:01:51.095 CC module/bdev/aio/bdev_aio.o 00:01:51.095 CC module/bdev/raid/raid0.o 00:01:51.095 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:51.095 CC module/bdev/aio/bdev_aio_rpc.o 00:01:51.095 CC module/bdev/gpt/gpt.o 00:01:51.095 CC module/bdev/nvme/bdev_mdns_client.o 00:01:51.095 CC module/bdev/raid/raid1.o 00:01:51.095 CC module/bdev/gpt/vbdev_gpt.o 00:01:51.095 CC module/bdev/nvme/vbdev_opal.o 00:01:51.095 CC module/bdev/null/bdev_null.o 00:01:51.095 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:51.095 CC module/bdev/raid/concat.o 00:01:51.095 CC module/bdev/null/bdev_null_rpc.o 00:01:51.095 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:51.095 CC module/bdev/split/vbdev_split.o 00:01:51.095 CC module/bdev/iscsi/bdev_iscsi.o 00:01:51.095 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:51.095 CC module/bdev/split/vbdev_split_rpc.o 00:01:51.095 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:51.095 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:51.095 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:51.095 CC module/blobfs/bdev/blobfs_bdev.o 00:01:51.095 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:51.095 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:51.095 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:51.354 SYMLINK libspdk_vfu_device.so 00:01:51.354 LIB libspdk_sock_posix.a 00:01:51.354 SO libspdk_sock_posix.so.6.0 00:01:51.613 LIB libspdk_blobfs_bdev.a 00:01:51.613 SO libspdk_blobfs_bdev.so.6.0 00:01:51.613 LIB libspdk_bdev_split.a 00:01:51.613 SYMLINK libspdk_sock_posix.so 00:01:51.613 SO libspdk_bdev_split.so.6.0 00:01:51.613 LIB libspdk_bdev_error.a 00:01:51.613 LIB libspdk_bdev_null.a 00:01:51.613 SYMLINK libspdk_blobfs_bdev.so 00:01:51.613 LIB libspdk_bdev_gpt.a 00:01:51.613 LIB libspdk_bdev_ftl.a 00:01:51.613 SO libspdk_bdev_error.so.6.0 00:01:51.613 SO libspdk_bdev_null.so.6.0 00:01:51.613 LIB libspdk_bdev_iscsi.a 00:01:51.613 SYMLINK libspdk_bdev_split.so 00:01:51.613 SO libspdk_bdev_gpt.so.6.0 00:01:51.613 SO libspdk_bdev_ftl.so.6.0 00:01:51.613 SO 
libspdk_bdev_iscsi.so.6.0 00:01:51.613 LIB libspdk_bdev_aio.a 00:01:51.613 LIB libspdk_bdev_zone_block.a 00:01:51.613 SYMLINK libspdk_bdev_error.so 00:01:51.613 SYMLINK libspdk_bdev_null.so 00:01:51.613 LIB libspdk_bdev_malloc.a 00:01:51.613 SO libspdk_bdev_zone_block.so.6.0 00:01:51.613 SO libspdk_bdev_aio.so.6.0 00:01:51.613 LIB libspdk_bdev_passthru.a 00:01:51.613 SYMLINK libspdk_bdev_ftl.so 00:01:51.613 SYMLINK libspdk_bdev_gpt.so 00:01:51.871 SYMLINK libspdk_bdev_iscsi.so 00:01:51.871 SO libspdk_bdev_malloc.so.6.0 00:01:51.871 SO libspdk_bdev_passthru.so.6.0 00:01:51.871 SYMLINK libspdk_bdev_zone_block.so 00:01:51.871 SYMLINK libspdk_bdev_aio.so 00:01:51.871 LIB libspdk_bdev_delay.a 00:01:51.871 SYMLINK libspdk_bdev_malloc.so 00:01:51.871 SYMLINK libspdk_bdev_passthru.so 00:01:51.871 SO libspdk_bdev_delay.so.6.0 00:01:51.871 LIB libspdk_bdev_lvol.a 00:01:51.871 SYMLINK libspdk_bdev_delay.so 00:01:51.871 SO libspdk_bdev_lvol.so.6.0 00:01:51.871 LIB libspdk_bdev_virtio.a 00:01:51.871 SYMLINK libspdk_bdev_lvol.so 00:01:51.871 SO libspdk_bdev_virtio.so.6.0 00:01:52.129 SYMLINK libspdk_bdev_virtio.so 00:01:52.387 LIB libspdk_bdev_raid.a 00:01:52.387 SO libspdk_bdev_raid.so.6.0 00:01:52.387 SYMLINK libspdk_bdev_raid.so 00:01:53.760 LIB libspdk_bdev_nvme.a 00:01:53.760 SO libspdk_bdev_nvme.so.7.0 00:01:53.760 SYMLINK libspdk_bdev_nvme.so 00:01:54.019 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:54.019 CC module/event/subsystems/scheduler/scheduler.o 00:01:54.019 CC module/event/subsystems/keyring/keyring.o 00:01:54.019 CC module/event/subsystems/sock/sock.o 00:01:54.019 CC module/event/subsystems/iobuf/iobuf.o 00:01:54.019 CC module/event/subsystems/vmd/vmd.o 00:01:54.019 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:54.019 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:54.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:54.019 LIB libspdk_event_keyring.a 00:01:54.019 LIB libspdk_event_scheduler.a 00:01:54.019 LIB libspdk_event_vfu_tgt.a 00:01:54.019 LIB libspdk_event_vhost_blk.a 00:01:54.277 LIB libspdk_event_vmd.a 00:01:54.277 LIB libspdk_event_sock.a 00:01:54.277 SO libspdk_event_keyring.so.1.0 00:01:54.277 LIB libspdk_event_iobuf.a 00:01:54.277 SO libspdk_event_vhost_blk.so.3.0 00:01:54.277 SO libspdk_event_vfu_tgt.so.3.0 00:01:54.277 SO libspdk_event_scheduler.so.4.0 00:01:54.277 SO libspdk_event_sock.so.5.0 00:01:54.277 SO libspdk_event_vmd.so.6.0 00:01:54.277 SO libspdk_event_iobuf.so.3.0 00:01:54.277 SYMLINK libspdk_event_keyring.so 00:01:54.277 SYMLINK libspdk_event_vhost_blk.so 00:01:54.277 SYMLINK libspdk_event_vfu_tgt.so 00:01:54.277 SYMLINK libspdk_event_scheduler.so 00:01:54.277 SYMLINK libspdk_event_sock.so 00:01:54.277 SYMLINK libspdk_event_vmd.so 00:01:54.277 SYMLINK libspdk_event_iobuf.so 00:01:54.536 CC module/event/subsystems/accel/accel.o 00:01:54.536 LIB libspdk_event_accel.a 00:01:54.536 SO libspdk_event_accel.so.6.0 00:01:54.536 SYMLINK libspdk_event_accel.so 00:01:54.795 CC module/event/subsystems/bdev/bdev.o 00:01:55.053 LIB libspdk_event_bdev.a 00:01:55.053 SO libspdk_event_bdev.so.6.0 00:01:55.053 SYMLINK libspdk_event_bdev.so 00:01:55.312 CC module/event/subsystems/scsi/scsi.o 00:01:55.312 CC module/event/subsystems/nbd/nbd.o 00:01:55.312 CC module/event/subsystems/ublk/ublk.o 00:01:55.312 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:55.312 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:55.312 LIB libspdk_event_nbd.a 00:01:55.312 LIB libspdk_event_ublk.a 00:01:55.312 LIB libspdk_event_scsi.a 00:01:55.312 SO libspdk_event_nbd.so.6.0 
00:01:55.312 SO libspdk_event_ublk.so.3.0 00:01:55.312 SO libspdk_event_scsi.so.6.0 00:01:55.312 SYMLINK libspdk_event_nbd.so 00:01:55.312 SYMLINK libspdk_event_ublk.so 00:01:55.570 SYMLINK libspdk_event_scsi.so 00:01:55.570 LIB libspdk_event_nvmf.a 00:01:55.570 SO libspdk_event_nvmf.so.6.0 00:01:55.570 SYMLINK libspdk_event_nvmf.so 00:01:55.570 CC module/event/subsystems/iscsi/iscsi.o 00:01:55.570 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:55.829 LIB libspdk_event_vhost_scsi.a 00:01:55.829 LIB libspdk_event_iscsi.a 00:01:55.829 SO libspdk_event_vhost_scsi.so.3.0 00:01:55.829 SO libspdk_event_iscsi.so.6.0 00:01:55.829 SYMLINK libspdk_event_vhost_scsi.so 00:01:55.829 SYMLINK libspdk_event_iscsi.so 00:01:56.088 SO libspdk.so.6.0 00:01:56.088 SYMLINK libspdk.so 00:01:56.088 CC test/rpc_client/rpc_client_test.o 00:01:56.088 TEST_HEADER include/spdk/accel.h 00:01:56.088 CC app/trace_record/trace_record.o 00:01:56.088 TEST_HEADER include/spdk/accel_module.h 00:01:56.088 TEST_HEADER include/spdk/assert.h 00:01:56.088 TEST_HEADER include/spdk/barrier.h 00:01:56.089 CXX app/trace/trace.o 00:01:56.089 TEST_HEADER include/spdk/base64.h 00:01:56.089 TEST_HEADER include/spdk/bdev.h 00:01:56.089 TEST_HEADER include/spdk/bdev_module.h 00:01:56.089 TEST_HEADER include/spdk/bdev_zone.h 00:01:56.089 TEST_HEADER include/spdk/bit_array.h 00:01:56.089 TEST_HEADER include/spdk/bit_pool.h 00:01:56.089 CC app/spdk_nvme_identify/identify.o 00:01:56.089 TEST_HEADER include/spdk/blob_bdev.h 00:01:56.089 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:56.089 TEST_HEADER include/spdk/blobfs.h 00:01:56.089 TEST_HEADER include/spdk/blob.h 00:01:56.089 CC app/spdk_top/spdk_top.o 00:01:56.089 CC app/spdk_nvme_perf/perf.o 00:01:56.089 TEST_HEADER include/spdk/conf.h 00:01:56.089 CC app/spdk_lspci/spdk_lspci.o 00:01:56.089 TEST_HEADER include/spdk/config.h 00:01:56.089 TEST_HEADER include/spdk/crc16.h 00:01:56.089 CC app/spdk_nvme_discover/discovery_aer.o 00:01:56.089 TEST_HEADER include/spdk/cpuset.h 00:01:56.089 TEST_HEADER include/spdk/crc32.h 00:01:56.089 TEST_HEADER include/spdk/crc64.h 00:01:56.089 TEST_HEADER include/spdk/dif.h 00:01:56.089 TEST_HEADER include/spdk/dma.h 00:01:56.089 TEST_HEADER include/spdk/endian.h 00:01:56.089 TEST_HEADER include/spdk/env_dpdk.h 00:01:56.089 TEST_HEADER include/spdk/env.h 00:01:56.089 TEST_HEADER include/spdk/event.h 00:01:56.089 TEST_HEADER include/spdk/fd_group.h 00:01:56.089 TEST_HEADER include/spdk/fd.h 00:01:56.089 TEST_HEADER include/spdk/file.h 00:01:56.089 TEST_HEADER include/spdk/ftl.h 00:01:56.089 TEST_HEADER include/spdk/gpt_spec.h 00:01:56.089 TEST_HEADER include/spdk/hexlify.h 00:01:56.089 TEST_HEADER include/spdk/histogram_data.h 00:01:56.089 TEST_HEADER include/spdk/idxd.h 00:01:56.089 TEST_HEADER include/spdk/idxd_spec.h 00:01:56.089 TEST_HEADER include/spdk/init.h 00:01:56.089 TEST_HEADER include/spdk/ioat.h 00:01:56.089 TEST_HEADER include/spdk/ioat_spec.h 00:01:56.089 TEST_HEADER include/spdk/iscsi_spec.h 00:01:56.089 TEST_HEADER include/spdk/json.h 00:01:56.089 TEST_HEADER include/spdk/jsonrpc.h 00:01:56.089 TEST_HEADER include/spdk/keyring.h 00:01:56.089 TEST_HEADER include/spdk/keyring_module.h 00:01:56.089 TEST_HEADER include/spdk/likely.h 00:01:56.089 TEST_HEADER include/spdk/lvol.h 00:01:56.089 TEST_HEADER include/spdk/log.h 00:01:56.089 TEST_HEADER include/spdk/mmio.h 00:01:56.089 TEST_HEADER include/spdk/memory.h 00:01:56.089 TEST_HEADER include/spdk/nbd.h 00:01:56.089 TEST_HEADER include/spdk/notify.h 00:01:56.089 TEST_HEADER 
include/spdk/nvme.h 00:01:56.089 TEST_HEADER include/spdk/nvme_intel.h 00:01:56.089 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:56.089 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:56.089 TEST_HEADER include/spdk/nvme_spec.h 00:01:56.089 TEST_HEADER include/spdk/nvme_zns.h 00:01:56.089 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:56.089 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:56.089 TEST_HEADER include/spdk/nvmf.h 00:01:56.089 TEST_HEADER include/spdk/nvmf_spec.h 00:01:56.089 TEST_HEADER include/spdk/nvmf_transport.h 00:01:56.089 TEST_HEADER include/spdk/opal.h 00:01:56.089 TEST_HEADER include/spdk/opal_spec.h 00:01:56.089 TEST_HEADER include/spdk/pci_ids.h 00:01:56.351 TEST_HEADER include/spdk/pipe.h 00:01:56.351 TEST_HEADER include/spdk/queue.h 00:01:56.351 TEST_HEADER include/spdk/reduce.h 00:01:56.351 TEST_HEADER include/spdk/scheduler.h 00:01:56.351 TEST_HEADER include/spdk/rpc.h 00:01:56.351 TEST_HEADER include/spdk/scsi.h 00:01:56.351 TEST_HEADER include/spdk/scsi_spec.h 00:01:56.351 TEST_HEADER include/spdk/sock.h 00:01:56.351 TEST_HEADER include/spdk/stdinc.h 00:01:56.351 TEST_HEADER include/spdk/string.h 00:01:56.351 TEST_HEADER include/spdk/thread.h 00:01:56.351 TEST_HEADER include/spdk/trace.h 00:01:56.351 TEST_HEADER include/spdk/trace_parser.h 00:01:56.351 TEST_HEADER include/spdk/tree.h 00:01:56.351 TEST_HEADER include/spdk/ublk.h 00:01:56.351 TEST_HEADER include/spdk/util.h 00:01:56.351 TEST_HEADER include/spdk/version.h 00:01:56.351 TEST_HEADER include/spdk/uuid.h 00:01:56.351 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:56.351 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:56.351 TEST_HEADER include/spdk/vhost.h 00:01:56.351 TEST_HEADER include/spdk/vmd.h 00:01:56.351 TEST_HEADER include/spdk/xor.h 00:01:56.351 TEST_HEADER include/spdk/zipf.h 00:01:56.351 CXX test/cpp_headers/accel.o 00:01:56.351 CXX test/cpp_headers/accel_module.o 00:01:56.351 CXX test/cpp_headers/assert.o 00:01:56.351 CXX test/cpp_headers/barrier.o 00:01:56.351 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:56.351 CXX test/cpp_headers/base64.o 00:01:56.351 CXX test/cpp_headers/bdev.o 00:01:56.351 CC app/spdk_dd/spdk_dd.o 00:01:56.351 CXX test/cpp_headers/bdev_module.o 00:01:56.351 CXX test/cpp_headers/bdev_zone.o 00:01:56.351 CXX test/cpp_headers/bit_array.o 00:01:56.351 CXX test/cpp_headers/bit_pool.o 00:01:56.351 CXX test/cpp_headers/blob_bdev.o 00:01:56.351 CXX test/cpp_headers/blobfs_bdev.o 00:01:56.351 CXX test/cpp_headers/blobfs.o 00:01:56.351 CXX test/cpp_headers/blob.o 00:01:56.351 CXX test/cpp_headers/conf.o 00:01:56.351 CXX test/cpp_headers/config.o 00:01:56.351 CXX test/cpp_headers/cpuset.o 00:01:56.351 CXX test/cpp_headers/crc16.o 00:01:56.351 CC app/nvmf_tgt/nvmf_main.o 00:01:56.351 CC app/iscsi_tgt/iscsi_tgt.o 00:01:56.351 CXX test/cpp_headers/crc32.o 00:01:56.351 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:56.351 CC examples/util/zipf/zipf.o 00:01:56.351 CC test/app/jsoncat/jsoncat.o 00:01:56.351 CC test/env/memory/memory_ut.o 00:01:56.351 CC test/app/stub/stub.o 00:01:56.351 CC examples/ioat/verify/verify.o 00:01:56.351 CC test/app/histogram_perf/histogram_perf.o 00:01:56.351 CC examples/ioat/perf/perf.o 00:01:56.352 CC test/env/vtophys/vtophys.o 00:01:56.352 CC test/env/pci/pci_ut.o 00:01:56.352 CC app/spdk_tgt/spdk_tgt.o 00:01:56.352 CC test/thread/poller_perf/poller_perf.o 00:01:56.352 CC app/fio/nvme/fio_plugin.o 00:01:56.352 CC test/dma/test_dma/test_dma.o 00:01:56.352 CC test/app/bdev_svc/bdev_svc.o 00:01:56.352 CC app/fio/bdev/fio_plugin.o 00:01:56.615 LINK 
spdk_lspci 00:01:56.615 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:56.615 CC test/env/mem_callbacks/mem_callbacks.o 00:01:56.615 LINK rpc_client_test 00:01:56.615 LINK interrupt_tgt 00:01:56.615 LINK jsoncat 00:01:56.615 CXX test/cpp_headers/crc64.o 00:01:56.615 CXX test/cpp_headers/dif.o 00:01:56.615 LINK spdk_nvme_discover 00:01:56.615 LINK vtophys 00:01:56.615 CXX test/cpp_headers/dma.o 00:01:56.615 CXX test/cpp_headers/endian.o 00:01:56.615 LINK nvmf_tgt 00:01:56.615 CXX test/cpp_headers/env_dpdk.o 00:01:56.615 LINK zipf 00:01:56.615 CXX test/cpp_headers/env.o 00:01:56.615 CXX test/cpp_headers/event.o 00:01:56.615 CXX test/cpp_headers/fd_group.o 00:01:56.615 LINK poller_perf 00:01:56.615 LINK histogram_perf 00:01:56.615 CXX test/cpp_headers/fd.o 00:01:56.615 LINK env_dpdk_post_init 00:01:56.615 LINK iscsi_tgt 00:01:56.615 LINK stub 00:01:56.615 CXX test/cpp_headers/file.o 00:01:56.615 CXX test/cpp_headers/ftl.o 00:01:56.615 CXX test/cpp_headers/gpt_spec.o 00:01:56.875 LINK spdk_trace_record 00:01:56.875 LINK verify 00:01:56.875 CXX test/cpp_headers/hexlify.o 00:01:56.875 CXX test/cpp_headers/histogram_data.o 00:01:56.875 CXX test/cpp_headers/idxd.o 00:01:56.875 LINK ioat_perf 00:01:56.875 CXX test/cpp_headers/idxd_spec.o 00:01:56.875 LINK spdk_tgt 00:01:56.875 LINK bdev_svc 00:01:56.875 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:56.875 CXX test/cpp_headers/init.o 00:01:56.875 CXX test/cpp_headers/ioat.o 00:01:56.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:57.141 LINK spdk_dd 00:01:57.141 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:57.141 CXX test/cpp_headers/ioat_spec.o 00:01:57.141 CXX test/cpp_headers/iscsi_spec.o 00:01:57.141 CXX test/cpp_headers/json.o 00:01:57.141 CXX test/cpp_headers/jsonrpc.o 00:01:57.141 CXX test/cpp_headers/keyring.o 00:01:57.141 CXX test/cpp_headers/keyring_module.o 00:01:57.141 CXX test/cpp_headers/likely.o 00:01:57.141 CXX test/cpp_headers/log.o 00:01:57.141 CXX test/cpp_headers/lvol.o 00:01:57.141 CXX test/cpp_headers/memory.o 00:01:57.141 CXX test/cpp_headers/mmio.o 00:01:57.141 LINK spdk_trace 00:01:57.141 CXX test/cpp_headers/nbd.o 00:01:57.141 CXX test/cpp_headers/notify.o 00:01:57.141 CXX test/cpp_headers/nvme.o 00:01:57.141 CXX test/cpp_headers/nvme_intel.o 00:01:57.141 CXX test/cpp_headers/nvme_ocssd.o 00:01:57.141 LINK test_dma 00:01:57.141 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:57.141 CXX test/cpp_headers/nvme_spec.o 00:01:57.141 CXX test/cpp_headers/nvme_zns.o 00:01:57.141 LINK pci_ut 00:01:57.141 CXX test/cpp_headers/nvmf_cmd.o 00:01:57.141 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:57.141 CXX test/cpp_headers/nvmf.o 00:01:57.141 CXX test/cpp_headers/nvmf_spec.o 00:01:57.141 CXX test/cpp_headers/nvmf_transport.o 00:01:57.141 CXX test/cpp_headers/opal.o 00:01:57.415 CXX test/cpp_headers/opal_spec.o 00:01:57.415 LINK nvme_fuzz 00:01:57.415 CC examples/vmd/lsvmd/lsvmd.o 00:01:57.415 CC test/event/event_perf/event_perf.o 00:01:57.415 CC examples/vmd/led/led.o 00:01:57.415 CC examples/sock/hello_world/hello_sock.o 00:01:57.415 CC test/event/reactor/reactor.o 00:01:57.415 CC examples/idxd/perf/perf.o 00:01:57.415 LINK spdk_bdev 00:01:57.415 CXX test/cpp_headers/pci_ids.o 00:01:57.415 CC test/event/reactor_perf/reactor_perf.o 00:01:57.415 CXX test/cpp_headers/pipe.o 00:01:57.415 CC examples/thread/thread/thread_ex.o 00:01:57.415 CXX test/cpp_headers/queue.o 00:01:57.415 CXX test/cpp_headers/reduce.o 00:01:57.415 CXX test/cpp_headers/rpc.o 00:01:57.415 CXX test/cpp_headers/scheduler.o 00:01:57.415 CC 
test/event/app_repeat/app_repeat.o 00:01:57.415 CXX test/cpp_headers/scsi.o 00:01:57.681 LINK spdk_nvme 00:01:57.681 CXX test/cpp_headers/scsi_spec.o 00:01:57.681 CXX test/cpp_headers/sock.o 00:01:57.681 CXX test/cpp_headers/stdinc.o 00:01:57.681 CXX test/cpp_headers/string.o 00:01:57.681 CXX test/cpp_headers/thread.o 00:01:57.681 CXX test/cpp_headers/trace.o 00:01:57.681 CXX test/cpp_headers/trace_parser.o 00:01:57.681 CXX test/cpp_headers/tree.o 00:01:57.681 CC test/event/scheduler/scheduler.o 00:01:57.681 CXX test/cpp_headers/ublk.o 00:01:57.681 CXX test/cpp_headers/util.o 00:01:57.681 CXX test/cpp_headers/uuid.o 00:01:57.681 CXX test/cpp_headers/version.o 00:01:57.681 CXX test/cpp_headers/vfio_user_pci.o 00:01:57.681 LINK lsvmd 00:01:57.681 CXX test/cpp_headers/vfio_user_spec.o 00:01:57.681 CXX test/cpp_headers/vhost.o 00:01:57.681 LINK led 00:01:57.681 CXX test/cpp_headers/vmd.o 00:01:57.681 LINK event_perf 00:01:57.681 CXX test/cpp_headers/xor.o 00:01:57.681 CXX test/cpp_headers/zipf.o 00:01:57.681 LINK reactor 00:01:57.681 LINK spdk_nvme_perf 00:01:57.681 LINK reactor_perf 00:01:57.681 CC app/vhost/vhost.o 00:01:57.681 LINK mem_callbacks 00:01:57.941 LINK vhost_fuzz 00:01:57.941 LINK hello_sock 00:01:57.941 LINK app_repeat 00:01:57.941 LINK spdk_top 00:01:57.941 LINK spdk_nvme_identify 00:01:57.941 CC test/nvme/e2edp/nvme_dp.o 00:01:57.941 CC test/nvme/err_injection/err_injection.o 00:01:57.941 CC test/nvme/overhead/overhead.o 00:01:57.941 CC test/nvme/reset/reset.o 00:01:57.941 LINK thread 00:01:57.941 CC test/nvme/aer/aer.o 00:01:57.941 CC test/nvme/sgl/sgl.o 00:01:57.941 CC test/blobfs/mkfs/mkfs.o 00:01:57.941 CC test/nvme/startup/startup.o 00:01:57.941 CC test/nvme/reserve/reserve.o 00:01:57.941 CC test/accel/dif/dif.o 00:01:57.941 CC test/nvme/simple_copy/simple_copy.o 00:01:58.202 LINK scheduler 00:01:58.202 LINK idxd_perf 00:01:58.202 CC test/nvme/cuse/cuse.o 00:01:58.202 CC test/nvme/fused_ordering/fused_ordering.o 00:01:58.202 CC test/nvme/boot_partition/boot_partition.o 00:01:58.202 CC test/lvol/esnap/esnap.o 00:01:58.202 CC test/nvme/connect_stress/connect_stress.o 00:01:58.202 CC test/nvme/fdp/fdp.o 00:01:58.202 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:58.202 CC test/nvme/compliance/nvme_compliance.o 00:01:58.202 LINK vhost 00:01:58.202 LINK startup 00:01:58.202 LINK err_injection 00:01:58.202 LINK reserve 00:01:58.202 LINK boot_partition 00:01:58.460 LINK reset 00:01:58.460 LINK connect_stress 00:01:58.460 LINK mkfs 00:01:58.460 LINK fused_ordering 00:01:58.460 LINK overhead 00:01:58.460 CC examples/nvme/reconnect/reconnect.o 00:01:58.460 CC examples/nvme/abort/abort.o 00:01:58.460 CC examples/nvme/hello_world/hello_world.o 00:01:58.460 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:58.460 CC examples/nvme/hotplug/hotplug.o 00:01:58.460 CC examples/nvme/arbitration/arbitration.o 00:01:58.460 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:58.460 LINK aer 00:01:58.460 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:58.460 LINK sgl 00:01:58.460 LINK nvme_dp 00:01:58.460 LINK memory_ut 00:01:58.460 LINK doorbell_aers 00:01:58.460 LINK simple_copy 00:01:58.460 LINK fdp 00:01:58.717 CC examples/accel/perf/accel_perf.o 00:01:58.717 LINK dif 00:01:58.717 CC examples/blob/cli/blobcli.o 00:01:58.717 LINK nvme_compliance 00:01:58.717 LINK pmr_persistence 00:01:58.717 CC examples/blob/hello_world/hello_blob.o 00:01:58.717 LINK hotplug 00:01:58.717 LINK hello_world 00:01:58.717 LINK cmb_copy 00:01:58.717 LINK abort 00:01:58.976 LINK arbitration 00:01:58.976 LINK 
reconnect 00:01:58.976 LINK hello_blob 00:01:58.976 LINK nvme_manage 00:01:58.976 CC test/bdev/bdevio/bdevio.o 00:01:58.976 LINK accel_perf 00:01:59.234 LINK blobcli 00:01:59.234 LINK iscsi_fuzz 00:01:59.492 LINK bdevio 00:01:59.493 CC examples/bdev/hello_world/hello_bdev.o 00:01:59.493 CC examples/bdev/bdevperf/bdevperf.o 00:01:59.750 LINK cuse 00:01:59.750 LINK hello_bdev 00:02:00.380 LINK bdevperf 00:02:00.638 CC examples/nvmf/nvmf/nvmf.o 00:02:00.896 LINK nvmf 00:02:03.430 LINK esnap 00:02:03.430 00:02:03.430 real 0m49.212s 00:02:03.430 user 10m10.617s 00:02:03.430 sys 2m28.800s 00:02:03.430 15:54:49 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:03.430 15:54:49 make -- common/autotest_common.sh@10 -- $ set +x 00:02:03.430 ************************************ 00:02:03.430 END TEST make 00:02:03.430 ************************************ 00:02:03.430 15:54:49 -- common/autotest_common.sh@1142 -- $ return 0 00:02:03.430 15:54:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:03.430 15:54:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:03.430 15:54:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:03.430 15:54:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.430 15:54:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:03.430 15:54:49 -- pm/common@44 -- $ pid=573796 00:02:03.430 15:54:49 -- pm/common@50 -- $ kill -TERM 573796 00:02:03.430 15:54:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.430 15:54:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:03.430 15:54:49 -- pm/common@44 -- $ pid=573798 00:02:03.430 15:54:49 -- pm/common@50 -- $ kill -TERM 573798 00:02:03.430 15:54:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.430 15:54:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:03.430 15:54:49 -- pm/common@44 -- $ pid=573800 00:02:03.430 15:54:49 -- pm/common@50 -- $ kill -TERM 573800 00:02:03.431 15:54:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.431 15:54:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:03.431 15:54:49 -- pm/common@44 -- $ pid=573827 00:02:03.431 15:54:49 -- pm/common@50 -- $ sudo -E kill -TERM 573827 00:02:03.431 15:54:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:03.431 15:54:49 -- nvmf/common.sh@7 -- # uname -s 00:02:03.431 15:54:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:03.431 15:54:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:03.431 15:54:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:03.431 15:54:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:03.431 15:54:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:03.431 15:54:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:03.431 15:54:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:03.431 15:54:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:03.431 15:54:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:03.431 15:54:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:03.431 15:54:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:03.431 
15:54:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:03.431 15:54:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:03.431 15:54:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:03.431 15:54:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:03.431 15:54:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:03.431 15:54:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:03.431 15:54:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:03.431 15:54:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.431 15:54:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.431 15:54:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.431 15:54:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.431 15:54:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.431 15:54:49 -- paths/export.sh@5 -- # export PATH 00:02:03.431 15:54:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.431 15:54:49 -- nvmf/common.sh@47 -- # : 0 00:02:03.431 15:54:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:03.431 15:54:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:03.431 15:54:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:03.431 15:54:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:03.431 15:54:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:03.431 15:54:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:03.431 15:54:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:03.431 15:54:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:03.431 15:54:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:03.431 15:54:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:03.431 15:54:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:03.431 15:54:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:03.431 15:54:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:03.431 15:54:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:03.431 15:54:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:03.431 15:54:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:03.431 15:54:49 
-- spdk/autotest.sh@46 -- # type -P udevadm 00:02:03.431 15:54:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:03.431 15:54:49 -- spdk/autotest.sh@48 -- # udevadm_pid=629906 00:02:03.431 15:54:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:03.431 15:54:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:03.431 15:54:49 -- pm/common@17 -- # local monitor 00:02:03.431 15:54:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.431 15:54:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.431 15:54:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.431 15:54:49 -- pm/common@21 -- # date +%s 00:02:03.431 15:54:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.431 15:54:49 -- pm/common@21 -- # date +%s 00:02:03.431 15:54:49 -- pm/common@25 -- # sleep 1 00:02:03.431 15:54:49 -- pm/common@21 -- # date +%s 00:02:03.431 15:54:49 -- pm/common@21 -- # date +%s 00:02:03.431 15:54:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051689 00:02:03.431 15:54:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051689 00:02:03.431 15:54:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051689 00:02:03.431 15:54:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051689 00:02:03.431 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051689_collect-vmstat.pm.log 00:02:03.431 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051689_collect-cpu-load.pm.log 00:02:03.431 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051689_collect-cpu-temp.pm.log 00:02:03.431 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051689_collect-bmc-pm.bmc.pm.log 00:02:04.808 15:54:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:04.808 15:54:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:04.808 15:54:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:04.808 15:54:50 -- common/autotest_common.sh@10 -- # set +x 00:02:04.808 15:54:50 -- spdk/autotest.sh@59 -- # create_test_list 00:02:04.808 15:54:50 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:04.808 15:54:50 -- common/autotest_common.sh@10 -- # set +x 00:02:04.808 15:54:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:04.808 15:54:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.808 15:54:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.808 15:54:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:04.808 15:54:50 -- spdk/autotest.sh@63 
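The four collect-* monitors above are started in the background with a shared epoch-stamped log prefix and are stopped later through per-monitor pidfiles (the kill -TERM loop earlier in this log). A generic sketch of that start/stop pattern, with an illustrative output directory and pidfile handling rather than the harness's own pm/common variables:

power_dir=/tmp/power && mkdir -p "$power_dir"
prefix="monitor.autotest.sh.$(date +%s)"                        # one timestamp shared by every monitor's log
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    scripts/perf/pm/"$mon" -d "$power_dir" -l -p "$prefix" &    # same -d/-l/-p flags as in the trace
    echo $! > "$power_dir/$mon.pid"                             # pidfile consumed by the TERM loop at teardown
done
# teardown
for pidfile in "$power_dir"/collect-*.pid; do
    [[ -e $pidfile ]] && kill -TERM "$(< "$pidfile")"
done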
-- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.808 15:54:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:04.808 15:54:50 -- common/autotest_common.sh@1455 -- # uname 00:02:04.808 15:54:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:04.808 15:54:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:04.808 15:54:50 -- common/autotest_common.sh@1475 -- # uname 00:02:04.808 15:54:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:04.808 15:54:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:04.808 15:54:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:04.808 15:54:50 -- spdk/autotest.sh@72 -- # hash lcov 00:02:04.808 15:54:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:04.808 15:54:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:04.808 --rc lcov_branch_coverage=1 00:02:04.808 --rc lcov_function_coverage=1 00:02:04.808 --rc genhtml_branch_coverage=1 00:02:04.808 --rc genhtml_function_coverage=1 00:02:04.808 --rc genhtml_legend=1 00:02:04.808 --rc geninfo_all_blocks=1 00:02:04.808 ' 00:02:04.808 15:54:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:04.808 --rc lcov_branch_coverage=1 00:02:04.808 --rc lcov_function_coverage=1 00:02:04.808 --rc genhtml_branch_coverage=1 00:02:04.808 --rc genhtml_function_coverage=1 00:02:04.808 --rc genhtml_legend=1 00:02:04.808 --rc geninfo_all_blocks=1 00:02:04.808 ' 00:02:04.808 15:54:50 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:04.808 --rc lcov_branch_coverage=1 00:02:04.808 --rc lcov_function_coverage=1 00:02:04.808 --rc genhtml_branch_coverage=1 00:02:04.808 --rc genhtml_function_coverage=1 00:02:04.808 --rc genhtml_legend=1 00:02:04.808 --rc geninfo_all_blocks=1 00:02:04.808 --no-external' 00:02:04.808 15:54:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:04.808 --rc lcov_branch_coverage=1 00:02:04.808 --rc lcov_function_coverage=1 00:02:04.808 --rc genhtml_branch_coverage=1 00:02:04.808 --rc genhtml_function_coverage=1 00:02:04.808 --rc genhtml_legend=1 00:02:04.808 --rc geninfo_all_blocks=1 00:02:04.808 --no-external' 00:02:04.808 15:54:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:04.808 lcov: LCOV version 1.14 00:02:04.808 15:54:50 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:19.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:19.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 
00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:34.540 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:34.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:34.540 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:34.541 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:34.541 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:34.541 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:34.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:34.542 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:34.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:34.542 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:38.749 15:55:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:38.749 15:55:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:38.749 15:55:24 -- common/autotest_common.sh@10 -- # set +x 00:02:38.749 15:55:24 -- spdk/autotest.sh@91 -- # rm -f 00:02:38.749 15:55:24 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.687 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:39.687 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:39.687 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:39.687 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:39.687 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:39.687 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:39.687 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:39.687 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:39.687 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:02:39.687 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:39.687 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:39.687 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:39.687 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:39.687 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:39.687 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:39.687 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:39.687 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:39.945 15:55:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:39.945 15:55:25 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:39.945 15:55:25 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:39.945 15:55:25 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:39.945 15:55:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:39.945 15:55:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:39.945 15:55:25 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:39.945 15:55:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:39.945 15:55:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:39.945 15:55:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:39.945 15:55:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:39.945 15:55:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
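The run of geninfo "no functions found" warnings above comes from the coverage baseline capture (lcov -c -i ... -t Baseline) started at the beginning of autotest; objects built only to compile-check a header typically contain no function records, so the initial capture has nothing to report for them. A hedged sketch of how such a baseline is normally combined with a post-test capture (standard lcov usage with placeholder names, not necessarily the harness's exact commands):

lcov -q -c -d "$SPDK_DIR" -o cov_test.info                     # capture counters after the tests have run
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info    # merge with the Baseline capture above
genhtml -q cov_total.info -o coverage_html                     # render the combined report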
00:02:39.945 15:55:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:39.945 15:55:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:39.945 15:55:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:39.945 No valid GPT data, bailing 00:02:39.945 15:55:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:39.945 15:55:25 -- scripts/common.sh@391 -- # pt= 00:02:39.945 15:55:25 -- scripts/common.sh@392 -- # return 1 00:02:39.945 15:55:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:39.945 1+0 records in 00:02:39.945 1+0 records out 00:02:39.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00186538 s, 562 MB/s 00:02:39.945 15:55:25 -- spdk/autotest.sh@118 -- # sync 00:02:39.945 15:55:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:39.945 15:55:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:39.945 15:55:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:41.866 15:55:27 -- spdk/autotest.sh@124 -- # uname -s 00:02:41.866 15:55:27 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:41.866 15:55:27 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.866 15:55:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.866 15:55:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.866 15:55:27 -- common/autotest_common.sh@10 -- # set +x 00:02:41.866 ************************************ 00:02:41.866 START TEST setup.sh 00:02:41.866 ************************************ 00:02:41.866 15:55:27 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.866 * Looking for test storage... 00:02:41.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.866 15:55:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:41.866 15:55:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:41.866 15:55:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:41.866 15:55:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.866 15:55:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.866 15:55:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:41.866 ************************************ 00:02:41.866 START TEST acl 00:02:41.866 ************************************ 00:02:41.866 15:55:27 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:42.141 * Looking for test storage... 
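Before wiping a namespace, autotest first rules out zoned devices (queue/zoned must read 'none') and then checks for an existing partition table; only a device that is not in use gets its first MiB zeroed, which is the dd seen above. A condensed sketch of that guard-then-wipe sequence (destructive; the device path is this job's test disk, and the GPT probe via spdk-gpt.py is folded into the blkid check here):

dev=/dev/nvme0n1
[[ $(< /sys/block/${dev##*/}/queue/zoned) == none ]] || exit 0   # skip zoned namespaces
if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then             # no partition-table type -> not in use
    dd if=/dev/zero of="$dev" bs=1M count=1                      # clear stale metadata in the first MiB
    sync
fi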
00:02:42.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:42.141 15:55:27 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:42.141 15:55:27 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:42.141 15:55:27 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.141 15:55:27 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.512 15:55:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:43.512 15:55:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:43.512 15:55:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.512 15:55:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:43.512 15:55:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.512 15:55:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:44.481 Hugepages 00:02:44.481 node hugesize free / total 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.481 00:02:44.481 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.481 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.739 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:44.740 15:55:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:44.740 15:55:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.740 15:55:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.740 15:55:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.740 ************************************ 00:02:44.740 START TEST denied 00:02:44.740 ************************************ 00:02:44.740 15:55:30 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:44.740 15:55:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:02:44.740 15:55:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:44.740 15:55:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:02:44.740 15:55:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.740 15:55:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:46.641 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:02:46.641 15:55:32 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.641 15:55:32 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.192 00:02:49.192 real 0m4.119s 00:02:49.192 user 0m1.226s 00:02:49.192 sys 0m1.918s 00:02:49.192 15:55:34 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:49.192 15:55:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:49.192 ************************************ 00:02:49.192 END TEST denied 00:02:49.192 ************************************ 00:02:49.192 15:55:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:49.192 15:55:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:49.192 15:55:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.192 15:55:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.192 15:55:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:49.192 ************************************ 00:02:49.192 START TEST allowed 00:02:49.192 ************************************ 00:02:49.192 15:55:34 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:49.192 15:55:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:02:49.192 15:55:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:49.192 15:55:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:02:49.192 15:55:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.192 15:55:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:51.720 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.720 15:55:37 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:51.720 15:55:37 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:51.720 15:55:37 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:51.720 15:55:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.720 15:55:37 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.094 00:02:53.094 real 0m3.870s 00:02:53.094 user 0m1.037s 00:02:53.094 sys 0m1.690s 00:02:53.094 15:55:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.094 15:55:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:53.094 ************************************ 00:02:53.094 END TEST allowed 00:02:53.094 ************************************ 00:02:53.094 15:55:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:53.094 00:02:53.094 real 0m10.849s 00:02:53.094 user 0m3.409s 00:02:53.094 sys 0m5.393s 00:02:53.094 15:55:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.094 15:55:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.094 ************************************ 00:02:53.094 END TEST acl 00:02:53.094 ************************************ 00:02:53.094 15:55:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:53.094 15:55:38 setup.sh -- 
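Both ACL sub-tests drive scripts/setup.sh purely through its PCI_BLOCKED and PCI_ALLOWED environment variables, as the trace above shows: "denied" expects the listed controller to be skipped during binding, while "allowed" expects it to be the only one rebound to vfio-pci. Run by hand, the two cases look roughly like this (root required; the BDF is this job's NVMe controller):

PCI_BLOCKED="0000:0b:00.0" ./scripts/setup.sh config   # expect: Skipping denied controller at 0000:0b:00.0
./scripts/setup.sh reset
PCI_ALLOWED="0000:0b:00.0" ./scripts/setup.sh config    # expect only 0000:0b:00.0 rebound (nvme -> vfio-pci)
./scripts/setup.sh reset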
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:53.094 15:55:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.094 15:55:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.094 15:55:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:53.094 ************************************ 00:02:53.094 START TEST hugepages 00:02:53.094 ************************************ 00:02:53.094 15:55:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:53.094 * Looking for test storage... 00:02:53.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44964772 kB' 'MemAvailable: 48421304 kB' 'Buffers: 2704 kB' 'Cached: 9160172 kB' 'SwapCached: 0 kB' 'Active: 6122116 kB' 'Inactive: 3481212 kB' 'Active(anon): 5736072 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444072 kB' 'Mapped: 168640 kB' 'Shmem: 5295620 kB' 'KReclaimable: 166232 kB' 'Slab: 494280 kB' 'SReclaimable: 166232 kB' 'SUnreclaim: 328048 kB' 'KernelStack: 12800 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 6857424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.094 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.095 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.096 
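The scan above is the get_meminfo helper from setup/common.sh working through /proc/meminfo one key at a time: every key that is not Hugepagesize is skipped with a continue (the long run of identical entries), and when Hugepagesize is reached its value, 2048, is echoed and the function returns 0. hugepages.sh then takes 2048 kB as default_hugepages, records the per-size and global nr_hugepages paths, finds two NUMA nodes under /sys/devices/system/node, and clear_hp echoes 0 into each node's hugepage counters before the tests start. A simplified sketch of that scan, reconstructed from the xtrace rather than copied from setup/common.sh (the real helper loads the file with mapfile into an array and strips any leading "Node N " prefix when a per-node meminfo file is used):

    get_meminfo() {
        local get=$1 var val _
        # /proc/meminfo lines look like "Hugepagesize:       2048 kB"
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are the continue entries in the trace
            echo "$val"                        # e.g. 2048 (kB) for Hugepagesize
            return 0
        done </proc/meminfo
        return 1
    }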
15:55:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:53.096 15:55:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:53.096 15:55:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.096 15:55:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.096 15:55:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.096 ************************************ 00:02:53.096 START TEST default_setup 00:02:53.096 ************************************ 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.096 15:55:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.030 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:54.290 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:54.290 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:54.290 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:55.225 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47068164 kB' 'MemAvailable: 50524800 kB' 'Buffers: 2704 kB' 'Cached: 9160268 kB' 'SwapCached: 0 kB' 'Active: 6134784 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748740 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456396 kB' 'Mapped: 168248 kB' 'Shmem: 5295716 kB' 'KReclaimable: 166436 kB' 'Slab: 494084 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327648 kB' 
'KernelStack: 12736 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6869740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 
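The meminfo dump above is the first check inside verify_nr_hugepages after scripts/setup.sh was run for the default_setup test: HugePages_Total and HugePages_Free are both 1024 and Hugetlb is 2097152 kB, which matches what get_test_nr_hugepages 2097152 0 asked for, since 2097152 kB divided by the 2048 kB page size gives 1024 pages on node 0. A small worked check of those figures (the numbers are taken from the trace; the variable names below are only illustrative):

    size_kb=2097152        # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048   # Hugepagesize reported by /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # 1024, the nr_hugepages / HugePages_Total value
    echo $(( 1024 * hugepagesize_kb ))      # 2097152, the Hugetlb figure in kB

verify_nr_hugepages then repeats the same get_meminfo scan for AnonHugePages, HugePages_Surp and HugePages_Rsvd to fill its anon, surp and resv counters, which is the long run of continue entries that follows.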
15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.489 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.490 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.491 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47072768 kB' 'MemAvailable: 50529404 kB' 'Buffers: 2704 kB' 'Cached: 9160272 kB' 'SwapCached: 0 kB' 'Active: 6133836 kB' 'Inactive: 3481212 kB' 'Active(anon): 5747792 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455408 kB' 'Mapped: 168324 kB' 'Shmem: 5295720 kB' 'KReclaimable: 166436 kB' 'Slab: 494116 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327680 kB' 'KernelStack: 12656 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6869760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.492 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47073640 kB' 'MemAvailable: 50530276 kB' 'Buffers: 2704 kB' 'Cached: 9160288 kB' 'SwapCached: 0 kB' 'Active: 6133776 kB' 'Inactive: 3481212 kB' 'Active(anon): 5747732 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455256 kB' 'Mapped: 168248 kB' 'Shmem: 5295736 kB' 'KReclaimable: 166436 kB' 'Slab: 494092 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327656 kB' 'KernelStack: 12656 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6869780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
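The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s... ]]" / "continue" entries in this trace come from get_meminfo in setup/common.sh scanning the memory counters field by field until it reaches the one it was asked for (HugePages_Surp above, HugePages_Rsvd here). Reconstructed purely from the xtrace output, the helper plausibly looks like the sketch below; the exact structure, the extglob detail, and the variable handling are assumptions, not the actual SPDK source. The trace resumes immediately after the block.

```bash
# Sketch only: a get_meminfo reconstructed from this xtrace, not the actual
# SPDK setup/common.sh. It takes the field name and an optional NUMA node.
get_meminfo() {
	local get=$1 node=${2:-} line var val _
	local mem_f=/proc/meminfo mem

	# With a node argument the per-node counters are read instead of
	# /proc/meminfo (node 0 is queried later in this trace).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	# Per-node files prefix every line with "Node N "; strip it so field
	# names match the /proc/meminfo spelling (needs: shopt -s extglob).
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan field by field until the requested key matches, then print its
	# value. Every non-matching field shows up as one "continue" entry in
	# the trace, which is why the runs above and below are so long.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}
```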
00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.493 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 
15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.494 nr_hugepages=1024 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.494 resv_hugepages=0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.494 surplus_hugepages=0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.494 anon_hugepages=0 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.494 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47074332 
kB' 'MemAvailable: 50530968 kB' 'Buffers: 2704 kB' 'Cached: 9160312 kB' 'SwapCached: 0 kB' 'Active: 6133764 kB' 'Inactive: 3481212 kB' 'Active(anon): 5747720 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455256 kB' 'Mapped: 168248 kB' 'Shmem: 5295760 kB' 'KReclaimable: 166436 kB' 'Slab: 494092 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327656 kB' 'KernelStack: 12656 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6869804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.495 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
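At this point the requested field (HugePages_Total) finally matches; the entries just below echo its value (1024), return, and feed it into the consistency check at hugepages.sh@110 before get_nodes and the per-node pass start. A sketch of that accounting, reconstructed only from the hugepages.sh entries visible in this trace and building on the get_meminfo sketch above, is given here; variable names follow the trace, but the real default_setup may differ.

```bash
# Sketch only: accounting reconstructed from the hugepages.sh@99-117 trace
# entries; not the actual SPDK script. Assumes the get_meminfo sketch above.
nr_hugepages=1024

surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# System-wide consistency: HugePages_Total (1024 here) must account for the
# requested pages plus any surplus and reserved pages.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

# Per-node pass: get_nodes found 2 nodes and expects node0 to hold all 1024
# pages; HugePages_Surp is then read per node, starting with
# /sys/devices/system/node/node0/meminfo as seen right after this point.
nodes_test=(1024 0)
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))
	get_meminfo HugePages_Surp "$node"
done
```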
00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 28237644 kB' 'MemUsed: 4639296 kB' 'SwapCached: 0 kB' 'Active: 1523356 kB' 'Inactive: 188012 kB' 'Active(anon): 1395344 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561188 kB' 'Mapped: 109844 kB' 'AnonPages: 153356 kB' 'Shmem: 1245164 kB' 'KernelStack: 6648 kB' 'PageTables: 3316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53076 kB' 'Slab: 226580 kB' 'SReclaimable: 53076 kB' 'SUnreclaim: 173504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.496 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.497 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.498 node0=1024 expecting 1024 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.498 00:02:55.498 real 0m2.556s 00:02:55.498 user 0m0.689s 00:02:55.498 sys 0m0.942s 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:55.498 15:55:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:55.498 ************************************ 00:02:55.498 END TEST default_setup 00:02:55.498 ************************************ 00:02:55.498 15:55:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:55.498 15:55:41 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:55.498 15:55:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.498 15:55:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.498 15:55:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.498 ************************************ 00:02:55.498 START TEST per_node_1G_alloc 00:02:55.498 ************************************ 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:55.498 15:55:41 
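The per_node_1G_alloc sizing that starts here requests 1048576 kB (1 GiB) on each of nodes 0 and 1, and the trace arrives at nr_hugepages=512 with nodes_test set to 512 for both nodes. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size shown in the meminfo snapshots below (variable names are illustrative, not taken from the SPDK scripts):

  # 1 GiB per node expressed as 2 MB hugepages: 1048576 / 2048 = 512
  requested_kb=1048576
  default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
  pages_per_node=$(( requested_kb / default_kb ))
  for node in 0 1; do
    echo "node${node}: ${pages_per_node} x ${default_kb} kB hugepages"
  done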
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.498 15:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.879 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.879 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.879 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.879 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.879 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.879 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.879 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.879 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.879 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.879 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.879 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.879 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.879 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.879 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.879 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.879 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.879 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.879 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47075332 kB' 'MemAvailable: 50531968 kB' 'Buffers: 2704 kB' 'Cached: 9160384 kB' 'SwapCached: 0 kB' 'Active: 6134340 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748296 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455616 kB' 'Mapped: 168368 kB' 'Shmem: 5295832 kB' 'KReclaimable: 166436 kB' 'Slab: 494220 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327784 kB' 'KernelStack: 12640 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6869984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
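The loop traced above and below is the get_meminfo pattern from setup/common.sh: split each meminfo line on ': ', skip every key that does not match the requested one, and echo the matching value. A minimal re-creation of that pattern (the function name is illustrative, and only the numeric value without the kB unit is returned):

  get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as in the trace
      echo "$val"                        # e.g. 0 for AnonHugePages on this node
      return 0
    done < /proc/meminfo
    return 1
  }
  get_meminfo_sketch AnonHugePages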
setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.880 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47076108 kB' 'MemAvailable: 50532744 kB' 'Buffers: 2704 kB' 'Cached: 9160388 kB' 'SwapCached: 0 kB' 'Active: 6134224 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748180 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455552 kB' 'Mapped: 168340 kB' 'Shmem: 5295836 kB' 'KReclaimable: 166436 kB' 'Slab: 494212 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327776 kB' 'KernelStack: 12672 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
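Note the empty node variable in these calls: with node unset the existence test reads /sys/devices/system/node/node/meminfo, which does not exist, so the global /proc/meminfo is used. A hedged sketch of how the same lookup would switch to the per-node sysfs file when a node id is supplied; the "Node N " prefix that sysfs adds is what the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips off:

  node=0
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo
  fi
  # sysfs lines carry a "Node N " prefix, /proc/meminfo lines do not
  grep -E '^(Node [0-9]+ +)?HugePages_(Total|Free|Rsvd|Surp):' "$mem_f"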
00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.881 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.882 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47076108 kB' 'MemAvailable: 50532744 kB' 'Buffers: 2704 kB' 'Cached: 9160404 kB' 'SwapCached: 0 kB' 'Active: 6134568 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748524 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455868 kB' 'Mapped: 168340 kB' 'Shmem: 5295852 kB' 'KReclaimable: 166436 kB' 'Slab: 494212 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327776 kB' 'KernelStack: 12688 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 
15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
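The long run of IFS=': ' / read -r / [[ ... == ... ]] / continue entries above is bash xtrace output from the get_meminfo helper in setup/common.sh: it scans /proc/meminfo (or a per-node meminfo file) one field at a time until it reaches the requested key, here HugePages_Rsvd, then echoes that field's value. Below is a minimal sketch of that lookup pattern; get_meminfo_sketch and its argument handling are illustrative only, not the SPDK implementation itself.

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
        # Per-node lookups read the node-specific meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            # Per-node meminfo lines carry a "Node N " prefix; drop it before splitting.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"    # value of the requested field, e.g. HugePages_Rsvd
                return 0
            fi
        done < "$mem_f"
        echo 0                      # field absent: report 0, as the trace does
    }

Called as get_meminfo_sketch HugePages_Rsvd (or with a node number for the per-node files), it prints the single value the traced loop eventually returns, which is 0 in this run.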
00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.883 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.884 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.885 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:57.146 nr_hugepages=1024 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.146 
resv_hugepages=0 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.146 surplus_hugepages=0 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.146 anon_hugepages=0 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:57.146 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47077856 kB' 'MemAvailable: 50534492 kB' 'Buffers: 2704 kB' 'Cached: 9160428 kB' 'SwapCached: 0 kB' 'Active: 6134232 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748188 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455516 kB' 'Mapped: 168340 kB' 'Shmem: 5295876 kB' 'KReclaimable: 166436 kB' 'Slab: 494212 kB' 'SReclaimable: 166436 kB' 'SUnreclaim: 327776 kB' 'KernelStack: 12656 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.147 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.148 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 29286844 kB' 'MemUsed: 3590096 kB' 'SwapCached: 0 kB' 'Active: 1523608 kB' 'Inactive: 188012 kB' 'Active(anon): 1395596 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561196 kB' 'Mapped: 109932 kB' 'AnonPages: 153532 kB' 'Shmem: 1245172 kB' 'KernelStack: 6696 kB' 'PageTables: 3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53076 kB' 'Slab: 226580 kB' 'SReclaimable: 53076 kB' 'SUnreclaim: 173504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.148 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
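At this point hugepages.sh has confirmed the global accounting (nr_hugepages=1024 with no reserved or surplus pages), discovered the two NUMA nodes via get_nodes (nodes_sys set to 512 for each, no_nodes=2), and is now reading /sys/devices/system/node/node0/meminfo to verify how those 1024 pages were distributed. A small illustrative check of that arithmetic, assuming the nodes_test array below simply mirrors the 512/512 split reported in the trace:

    nr_hugepages=1024 surp=0 resv=0
    (( nr_hugepages + surp + resv == 1024 )) && echo "global hugepage count is consistent"
    nodes_test=(512 512)                   # one entry per NUMA node, matching nodes_sys above
    total=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))     # same per-node adjustment as hugepages.sh@116
        (( total += nodes_test[node] ))
    done
    (( total == nr_hugepages )) && echo "both nodes together account for all $nr_hugepages pages"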
00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.149 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 17790768 kB' 'MemUsed: 9874016 kB' 'SwapCached: 0 kB' 'Active: 4610780 kB' 'Inactive: 3293200 kB' 'Active(anon): 4352748 kB' 'Inactive(anon): 0 kB' 'Active(file): 258032 kB' 'Inactive(file): 3293200 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7601976 kB' 'Mapped: 58408 kB' 'AnonPages: 302120 kB' 'Shmem: 4050744 kB' 'KernelStack: 5960 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113360 kB' 'Slab: 267632 kB' 'SReclaimable: 113360 kB' 'SUnreclaim: 154272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
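[editor's note] The blocks of near-identical "[[ ... == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _" xtrace lines above come from setup/common.sh walking a meminfo snapshot key by key until it reaches the requested field; node0 has just finished (echo 0 / return 0), and the printf above is the node1 snapshot that the next block scans the same way. A minimal, self-contained sketch of that lookup follows — the helper name and simplified argument handling are illustrative, not the exact SPDK source:

    #!/usr/bin/env bash
    # Sketch: read one field out of /proc/meminfo or a per-node meminfo file,
    # skipping every other key with `continue` (hence the repeated xtrace lines).
    get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
        line=${line#Node [0-9]* }            # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # not the field we want -> next line
        echo "${val:-0}"
        return 0
      done < "$mem_f"
      echo 0                                 # field absent -> report 0, as in the trace
    }

    get_meminfo_sketch HugePages_Surp 1      # e.g. prints the node1 surplus (0 in the run above)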
00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.150 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:57.151 node0=512 expecting 512 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:57.151 node1=512 expecting 512 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:57.151 00:02:57.151 real 0m1.498s 00:02:57.151 user 0m0.627s 00:02:57.151 sys 0m0.825s 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:57.151 15:55:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:57.151 ************************************ 00:02:57.151 END TEST per_node_1G_alloc 00:02:57.151 ************************************ 00:02:57.151 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:57.151 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:57.151 15:55:42 setup.sh.hugepages -- 
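[editor's note] The "node0=512 expecting 512" / "node1=512 expecting 512" echoes above are the final assertion of per_node_1G_alloc: after folding HugePages_Surp and the reserved count into nodes_test, each NUMA node must hold its 512-page share before run_test records the real/user/sys timings and prints the END TEST banner. A rough, hedged sketch of an equivalent per-node check, reading the counts from sysfs rather than the meminfo bookkeeping the trace uses (node count and expected split are hard-coded here for illustration):

    #!/usr/bin/env bash
    # Sketch of the per-node assertion behind the "nodeX=512 expecting 512" lines.
    expected=512
    for node in 0 1; do
      allocated=$(cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)
      echo "node$node=$allocated expecting $expected"
      [[ $allocated -eq $expected ]] || { echo "FAIL: node$node"; exit 1; }
    done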
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.151 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.151 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:57.151 ************************************ 00:02:57.151 START TEST even_2G_alloc 00:02:57.151 ************************************ 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:57.151 15:55:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.151 15:55:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.554 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:58.554 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
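[editor's note] The even_2G_alloc trace above asks get_test_nr_hugepages for 2097152 kB, which at the default 2048 kB hugepage size works out to nr_hugepages=1024, split evenly as 512 per node (the nodes_test[...]=512 assignments), before scripts/setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes; the PCI lines that follow are setup.sh reporting devices already bound to vfio-pci. A small sketch of that size arithmetic, with illustrative variable names rather than the SPDK ones:

    #!/usr/bin/env bash
    # Sketch of the arithmetic behind the even 2G allocation traced above.
    size_kb=2097152                                               # requested total (2 GiB)
    hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this system
    nr_hugepages=$(( size_kb / hugepage_kb ))                     # -> 1024
    nodes=2
    per_node=$(( nr_hugepages / nodes ))                          # -> 512 per node
    echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ($per_node pages per node)"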
00:02:58.554 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:58.554 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:58.554 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:58.554 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:58.554 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:58.554 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:58.554 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:58.554 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.554 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:58.554 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:58.554 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:58.554 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:58.554 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:58.554 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:58.554 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.554 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47065088 kB' 'MemAvailable: 50521720 kB' 'Buffers: 2704 kB' 'Cached: 9160520 kB' 'SwapCached: 0 kB' 'Active: 6134752 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748708 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455960 kB' 'Mapped: 168360 kB' 'Shmem: 5295968 kB' 'KReclaimable: 166428 kB' 'Slab: 494456 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 328028 kB' 'KernelStack: 12656 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 
15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.555 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- 
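[editor's note] Before verifying the counts, the trace above checks the transparent hugepage policy ("always [madvise] never" on this host) and, since it is not pinned to [never], samples AnonHugePages so THP-backed anonymous memory can be accounted for separately; it comes back 0 kB here, giving the anon=0 assignment just below. A brief, hedged paraphrase of that guard:

    #!/usr/bin/env bash
    # Sketch of the THP guard: only read AnonHugePages when transparent
    # hugepages are not globally disabled. Illustrative paraphrase only.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/AnonHugePages:/ {print $2}' /proc/meminfo) # kB of THP-backed anon memory
    fi
    echo "anon=$anon"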
setup/hugepages.sh@97 -- # anon=0 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47069320 kB' 'MemAvailable: 50525952 kB' 'Buffers: 2704 kB' 'Cached: 9160524 kB' 'SwapCached: 0 kB' 'Active: 6134644 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748600 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455860 kB' 'Mapped: 168352 kB' 'Shmem: 5295972 kB' 'KReclaimable: 166428 kB' 'Slab: 494420 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 327992 kB' 'KernelStack: 12688 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.556 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.557 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47070484 kB' 'MemAvailable: 50527116 kB' 'Buffers: 2704 kB' 'Cached: 9160540 kB' 'SwapCached: 0 kB' 'Active: 6134320 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748276 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455460 kB' 'Mapped: 168272 kB' 'Shmem: 5295988 kB' 'KReclaimable: 166428 kB' 'Slab: 494420 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 327992 kB' 'KernelStack: 12656 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.558 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.559 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.560 nr_hugepages=1024 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.560 resv_hugepages=0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.560 surplus_hugepages=0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.560 anon_hugepages=0 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
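For readers following the xtrace above: the lines tagged setup/common.sh are the get_meminfo helper that hugepages.sh invokes once per counter (HugePages_Surp, HugePages_Rsvd, HugePages_Total), and each invocation scans every meminfo key until the requested one matches. The following is a minimal sketch of what that traced loop appears to do, reconstructed from the trace itself rather than quoted from the SPDK tree, so names and structure beyond what the trace shows are assumptions:

    #!/usr/bin/env bash
    # Hedged reconstruction of the lookup exercised in the trace above;
    # pieced together from the xtrace of setup/common.sh, not copied from it.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # When a NUMA node is requested and its sysfs meminfo exists, read
        # that file instead of the system-wide /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Slurp the file and strip the "Node N " prefix carried by per-node files.
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the "Key: value [kB]" entries until the requested counter is
        # found, echo its numeric value and return; other keys are skipped.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Counters consumed by the even_2G_alloc accounting step in hugepages.sh
    # (the trace above reports 1024 total, 0 reserved, 0 surplus):
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "nr_hugepages=$(get_meminfo HugePages_Total) surp=$surp resv=$resv"

Because no node argument is passed here, the traced [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the scan falls back to /proc/meminfo, which is why every key in the file is compared against the escaped HugePages pattern before the matching line finally echoes its value and returns.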
00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47069984 kB' 'MemAvailable: 50526616 kB' 'Buffers: 2704 kB' 'Cached: 9160560 kB' 'SwapCached: 0 kB' 'Active: 6134628 kB' 'Inactive: 3481212 kB' 'Active(anon): 5748584 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455800 kB' 'Mapped: 168272 kB' 'Shmem: 5296008 kB' 'KReclaimable: 166428 kB' 'Slab: 494420 kB' 'SReclaimable: 166428 kB' 'SUnreclaim: 327992 kB' 'KernelStack: 12688 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6870312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.560 
15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.560 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.561 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:58.562 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 29291580 kB' 'MemUsed: 3585360 kB' 'SwapCached: 0 kB' 'Active: 1523512 kB' 'Inactive: 188012 kB' 'Active(anon): 1395500 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561208 kB' 'Mapped: 110304 kB' 'AnonPages: 153452 kB' 'Shmem: 1245184 kB' 'KernelStack: 6664 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53068 kB' 'Slab: 226524 kB' 'SReclaimable: 53068 kB' 'SUnreclaim: 173456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:58.563 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 17776640 kB' 'MemUsed: 9888144 kB' 'SwapCached: 0 kB' 'Active: 4613288 kB' 'Inactive: 3293200 kB' 'Active(anon): 4355256 kB' 'Inactive(anon): 0 kB' 'Active(file): 258032 kB' 'Inactive(file): 3293200 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7602100 kB' 'Mapped: 58404 kB' 'AnonPages: 304488 kB' 'Shmem: 4050868 kB' 'KernelStack: 5960 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113360 kB' 'Slab: 267896 kB' 'SReclaimable: 113360 kB' 'SUnreclaim: 154536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:58.564 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:58.565 node0=512 expecting 512
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:58.565 node1=512 expecting 512
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:58.565
00:02:58.565 real 0m1.506s
00:02:58.565 user 0m0.630s
00:02:58.565 sys 0m0.840s
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:58.565 15:55:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:58.565 ************************************
00:02:58.565 END TEST even_2G_alloc
00:02:58.565 ************************************
00:02:58.565 15:55:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:58.565 15:55:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:58.565 15:55:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:58.565 15:55:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:58.565 15:55:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
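The trace above is the heart of the even_2G_alloc check: get_meminfo() in setup/common.sh walks a meminfo file key by key with IFS=': ' and read, echoes the value of the single field it was asked for (HugePages_Total from /proc/meminfo, then HugePages_Surp from each node's meminfo), and setup/hugepages.sh confirms that the 1024 requested pages landed as 512 on node0 and 512 on node1. The sketch below shows that lookup pattern in isolation; the function name get_meminfo_sketch and its argument handling are illustrative assumptions for this log commentary, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Illustrative sketch only: look up one field from /proc/meminfo, or from a
    # per-node meminfo file when a node number is supplied.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it so the key
        # sits at the start of the line, exactly as in /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only, e.g. 512 for HugePages_Total on node0
                return 0
            fi
        done
        return 1
    }

    # Example of the even split checked above (values taken from this run):
    #   get_meminfo_sketch HugePages_Total 0   -> 512
    #   get_meminfo_sketch HugePages_Surp 0    -> 0

The odd_alloc test that starts below repeats the same verification with HUGEMEM=2049, i.e. 1025 pages of 2048 kB; that total cannot split evenly, so the per-node loop assigns 512 pages to one node and 513 to the other before the meminfo checks run again.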
************************************ 00:02:58.565 START TEST odd_alloc 00:02:58.565 ************************************ 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:58.565 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.823 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:58.823 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:58.823 15:55:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:58.823 15:55:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.823 15:55:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.781 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:59.781 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:59.781 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:59.781 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:59.781 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:59.781 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:59.781 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:02:59.781 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:59.781 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:59.781 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.781 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:59.781 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:59.781 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:59.781 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:59.781 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:59.781 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:59.781 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.048 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47066384 kB' 'MemAvailable: 50522996 kB' 'Buffers: 2704 kB' 'Cached: 9160656 kB' 'SwapCached: 0 kB' 'Active: 6131288 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745244 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452344 kB' 'Mapped: 167240 kB' 'Shmem: 5296104 kB' 'KReclaimable: 166388 kB' 'Slab: 494188 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327800 kB' 'KernelStack: 12608 kB' 'PageTables: 7100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 
6855936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB'
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47066680 kB' 'MemAvailable: 50523292 kB' 'Buffers: 2704 kB' 'Cached: 9160660 kB' 'SwapCached: 0 kB' 'Active: 6131212 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745168 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452304 kB' 'Mapped: 167292 kB' 'Shmem: 5296108 kB' 'KReclaimable: 166388 kB' 'Slab: 494216 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327828 kB' 'KernelStack: 12624 kB' 'PageTables: 7156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 6855956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.049 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.050 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 
15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47066352 kB' 'MemAvailable: 50522964 kB' 'Buffers: 2704 kB' 'Cached: 9160676 kB' 'SwapCached: 0 kB' 'Active: 6131108 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745064 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452152 kB' 'Mapped: 167212 kB' 'Shmem: 5296124 kB' 'KReclaimable: 166388 kB' 'Slab: 494256 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327868 kB' 'KernelStack: 12640 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 6855976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.051 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 
15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.052 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:00.053 nr_hugepages=1025 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.053 resv_hugepages=0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.053 surplus_hugepages=0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.053 anon_hugepages=0 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47066352 kB' 'MemAvailable: 50522964 kB' 'Buffers: 2704 kB' 'Cached: 9160696 kB' 'SwapCached: 0 kB' 'Active: 6131104 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745060 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452116 kB' 'Mapped: 167212 kB' 'Shmem: 5296144 kB' 'KReclaimable: 166388 kB' 'Slab: 494256 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327868 kB' 'KernelStack: 12624 kB' 'PageTables: 7096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 6855996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.053 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:00.054 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 29284432 kB' 'MemUsed: 3592508 kB' 'SwapCached: 0 kB' 'Active: 1520948 kB' 'Inactive: 188012 kB' 'Active(anon): 1392936 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561224 kB' 'Mapped: 109880 kB' 'AnonPages: 150812 kB' 'Shmem: 1245200 kB' 'KernelStack: 6680 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53060 kB' 'Slab: 226408 kB' 'SReclaimable: 53060 kB' 'SUnreclaim: 173348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
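The field-by-field scan recorded above is setup/common.sh's get_meminfo helper at work: it loads either /proc/meminfo or a per-node meminfo file, strips the "Node N " prefix, and walks every key until it reaches the one requested. Below is a minimal standalone sketch of that pattern; it is an illustrative reconstruction rather than the SPDK source, with the file paths and the extglob prefix-strip mirroring what the trace shows.

shopt -s extglob    # needed for the +([0-9]) prefix-strip below

# Illustrative reconstruction of the meminfo scan traced above, not the
# SPDK setup/common.sh source. Prints the value of one meminfo key, either
# system-wide or for a single NUMA node.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; drop it so the keys match.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every key except the requested one
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # 1025 in the run above
get_meminfo HugePages_Surp 0     # surplus huge pages on NUMA node 0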
00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.055 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.055 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 17782172 kB' 'MemUsed: 9882612 kB' 'SwapCached: 0 kB' 'Active: 4609920 kB' 'Inactive: 3293200 kB' 'Active(anon): 4351888 kB' 'Inactive(anon): 0 kB' 'Active(file): 258032 kB' 'Inactive(file): 3293200 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7602220 kB' 'Mapped: 57332 kB' 'AnonPages: 301064 kB' 'Shmem: 4050988 kB' 'KernelStack: 5944 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113328 kB' 'Slab: 267848 kB' 'SReclaimable: 113328 kB' 'SUnreclaim: 154520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.056 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.056 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
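The per-node scans on either side of this point are get_meminfo HugePages_Surp calls issued from the accounting loop in setup/hugepages.sh (the @115-@117 lines in the trace), which bumps each node's expected page count by the reserved and surplus pages. A condensed, self-contained sketch of that loop follows, using the node counts this run requested (512 on node 0, 513 on node 1, as echoed further below); it is an illustration, not the SPDK setup/hugepages.sh source.

# Condensed illustration of the per-node accounting loop being traced here.
nodes_test=(512 513)   # pages requested per node in this odd_alloc run
resv=0                 # reserved pages (HugePages_Rsvd is 0 in the meminfo above)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Surplus pages for this node, read straight from sysfs.
    surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))   # surp is 0 for both nodes in this trace
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # 512 and 513, unchanged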
00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:00.057 node0=512 expecting 513 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:00.057 node1=513 expecting 512 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:00.057 00:03:00.057 real 0m1.447s 00:03:00.057 user 0m0.593s 00:03:00.057 sys 0m0.814s 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:00.057 15:55:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:00.057 ************************************ 00:03:00.057 END TEST odd_alloc 00:03:00.057 ************************************ 00:03:00.057 15:55:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:00.057 15:55:46 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:00.057 15:55:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.057 15:55:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.057 15:55:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.370 ************************************ 00:03:00.370 START TEST custom_alloc 00:03:00.370 ************************************ 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.370 15:55:46 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.370 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
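The nodes_hp values just computed come from a straightforward size-to-page conversion: each requested size in kB is divided by the 2048 kB Hugepagesize reported in meminfo. A short illustration of that arithmetic and of the HUGENODE string the test derives from it (the script itself accumulates HUGENODE as an array and joins it with IFS=','):

# Illustration of the page-count arithmetic behind nodes_hp[0]=512 and
# nodes_hp[1]=1024 above; the values are the ones visible in this trace.
default_hugepages=2048                              # kB, from "Hugepagesize: 2048 kB"
nodes_hp[0]=$(( 1048576 / default_hugepages ))      # first request  -> 512 pages
nodes_hp[1]=$(( 2097152 / default_hugepages ))      # second request -> 1024 pages
HUGENODE="nodes_hp[0]=${nodes_hp[0]},nodes_hp[1]=${nodes_hp[1]}"
echo "$HUGENODE"                                    # nodes_hp[0]=512,nodes_hp[1]=1024
echo $(( nodes_hp[0] + nodes_hp[1] ))               # 1536, the nr_hugepages verified below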
00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.371 15:55:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.308 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.308 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.308 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:01.308 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.308 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.308 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.308 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:01.308 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.308 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.308 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.308 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.308 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:01.308 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.308 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.308 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.308 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:01.308 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 46031320 kB' 'MemAvailable: 49487932 kB' 'Buffers: 2704 kB' 'Cached: 9160788 kB' 'SwapCached: 0 kB' 'Active: 6131832 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745788 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452792 kB' 'Mapped: 167312 kB' 'Shmem: 5296236 kB' 'KReclaimable: 166388 kB' 'Slab: 494008 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327620 kB' 'KernelStack: 12656 kB' 'PageTables: 7132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 6856196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.572 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.573 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.574 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 46031320 kB' 'MemAvailable: 49487932 kB' 'Buffers: 2704 kB' 'Cached: 9160792 kB' 'SwapCached: 0 kB' 'Active: 6131280 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745236 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452264 kB' 'Mapped: 167300 kB' 'Shmem: 5296240 kB' 'KReclaimable: 166388 kB' 'Slab: 494008 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327620 kB' 'KernelStack: 12656 kB' 'PageTables: 7128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 6856216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.574 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
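[editorial note] The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries here (and the earlier AnonHugePages pass) is common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested one, then echoing its value (0 for both AnonHugePages and HugePages_Surp on this host). A rough stand-alone sketch of that lookup, using only what the trace shows (mapfile into an array, IFS=': ' splitting); the helper name my_get_meminfo is hypothetical, not SPDK's:

# Sketch: scan /proc/meminfo the way the traced get_meminfo loop does.
my_get_meminfo() {
    local get=$1 var val _
    local -a mem
    mapfile -t mem < /proc/meminfo
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every other field, as in the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}
# e.g. my_get_meminfo HugePages_Surp   -> 0 on this host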
00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.575 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 46031320 kB' 'MemAvailable: 49487932 kB' 'Buffers: 2704 kB' 'Cached: 9160804 kB' 'SwapCached: 0 kB' 'Active: 6131156 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745112 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452080 kB' 'Mapped: 167224 kB' 'Shmem: 5296252 kB' 'KReclaimable: 166388 kB' 'Slab: 494024 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327636 kB' 'KernelStack: 12640 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 6856236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.576 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:01.577 nr_hugepages=1536 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.577 resv_hugepages=0 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.577 surplus_hugepages=0 00:03:01.577 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.577 anon_hugepages=0 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 46031068 kB' 'MemAvailable: 49487680 kB' 'Buffers: 2704 kB' 'Cached: 9160832 kB' 'SwapCached: 0 kB' 'Active: 6131188 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745144 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452088 kB' 'Mapped: 167224 kB' 'Shmem: 5296280 kB' 'KReclaimable: 166388 kB' 'Slab: 494024 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327636 kB' 'KernelStack: 12640 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 6856256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.578 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.579 
15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.579 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 29286048 kB' 'MemUsed: 3590892 kB' 'SwapCached: 0 kB' 'Active: 1521484 kB' 'Inactive: 188012 kB' 'Active(anon): 1393472 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561304 kB' 'Mapped: 109892 kB' 'AnonPages: 151384 kB' 'Shmem: 1245280 kB' 'KernelStack: 6744 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53060 kB' 'Slab: 226424 kB' 'SReclaimable: 53060 kB' 'SUnreclaim: 173364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.580 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 16744772 kB' 'MemUsed: 10920012 kB' 'SwapCached: 0 kB' 'Active: 4609876 kB' 'Inactive: 3293200 kB' 'Active(anon): 4351844 kB' 'Inactive(anon): 0 kB' 'Active(file): 258032 kB' 'Inactive(file): 3293200 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7602252 kB' 'Mapped: 57332 kB' 'AnonPages: 300896 kB' 'Shmem: 4051020 kB' 'KernelStack: 5912 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113328 kB' 'Slab: 267600 kB' 'SReclaimable: 113328 kB' 'SUnreclaim: 154272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.581 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same [[ <field> == HugePages_Surp ]] / continue / IFS=': ' / read xtrace repeats for each remaining /proc/meminfo field ...]
00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.582
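
The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it hits the requested key. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, so the function name and the per-node fallback handling here are assumptions:

shopt -s extglob
get_meminfo_sketch() {
        # Print the value of one /proc/meminfo (or per-node meminfo) field,
        # mirroring the IFS=': ' / read / continue loop traced above.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix on per-node files
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<<"$line"
                [[ $var == "$get" ]] || continue
                echo "$val"
                return 0
        done
}
# Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this runner
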
15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:01.582 node0=512 expecting 512 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:01.582 node1=1024 expecting 1024 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:01.582 00:03:01.582 real 0m1.515s 00:03:01.582 user 0m0.651s 00:03:01.582 sys 0m0.830s 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.582 15:55:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:01.582 ************************************ 00:03:01.582 END TEST custom_alloc 00:03:01.582 ************************************ 00:03:01.841 15:55:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:01.841 15:55:47 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:01.841 15:55:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.841 15:55:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.841 15:55:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.841 ************************************ 00:03:01.841 START TEST no_shrink_alloc 00:03:01.841 ************************************ 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.841 15:55:47 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.841 15:55:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.777 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.777 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.777 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.777 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.777 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.777 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.039 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.039 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.039 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:03.039 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.039 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:03.039 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:03.040 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:03.040 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:03.040 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.040 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.040 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
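
The get_test_nr_hugepages trace just above (setup/hugepages.sh@49-@73) turns the 2097152 kB request into 1024 default-size (2048 kB) pages and pins them to the caller's node list, node 0 here. A rough reconstruction of that arithmetic, not the verbatim SPDK helper, with illustrative names:

get_test_nr_hugepages_sketch() {
        # size is given in kB; with a 2048 kB Hugepagesize this yields 1024 pages.
        local size=$1; shift
        local node_ids=("$@")
        local hugepagesize
        hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        local nr_hugepages=$((size / hugepagesize))
        declare -ga nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
                nodes_test[node]=$nr_hugepages   # per-node target, as in the traced loop
        done
        echo "nr_hugepages=$nr_hugepages on node(s): ${node_ids[*]}"
}
# get_test_nr_hugepages_sketch 2097152 0   -> nr_hugepages=1024 on node(s): 0
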
mem_f=/proc/meminfo 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47040096 kB' 'MemAvailable: 50496708 kB' 'Buffers: 2704 kB' 'Cached: 9160916 kB' 'SwapCached: 0 kB' 'Active: 6131576 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745532 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452472 kB' 'Mapped: 167188 kB' 'Shmem: 5296364 kB' 'KReclaimable: 166388 kB' 'Slab: 493948 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327560 kB' 'KernelStack: 12608 kB' 'PageTables: 6944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6856652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:03.040 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same [[ <field> == AnonHugePages ]] / continue / IFS=': ' / read xtrace repeats for each remaining /proc/meminfo field ...]
00:03:03.041 15:55:48
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.041 15:55:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47040096 kB' 'MemAvailable: 50496708 kB' 'Buffers: 2704 kB' 'Cached: 9160916 kB' 'SwapCached: 0 kB' 'Active: 6131640 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745596 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452556 kB' 'Mapped: 167312 kB' 'Shmem: 5296364 kB' 'KReclaimable: 166388 kB' 'Slab: 493980 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327592 kB' 'KernelStack: 12672 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6856668 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.041 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.042 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same [[ <field> == HugePages_Surp ]] / continue / IFS=': ' / read xtrace repeats for each remaining /proc/meminfo field ...]
00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47040760 kB' 'MemAvailable: 50497372 kB' 'Buffers: 2704 kB' 'Cached: 9160936 kB' 'SwapCached: 0 kB' 'Active: 6131812 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745768 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452728 kB' 'Mapped: 167236 kB' 'Shmem: 5296384 kB' 'KReclaimable: 166388 kB' 'Slab: 493964 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327576 kB' 'KernelStack: 12688 kB' 'PageTables: 7188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6857688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.043 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.044 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.308 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.309 nr_hugepages=1024 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.309 resv_hugepages=0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.309 surplus_hugepages=0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.309 anon_hugepages=0 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47041632 kB' 'MemAvailable: 50498244 kB' 'Buffers: 2704 kB' 'Cached: 9160936 kB' 'SwapCached: 0 kB' 'Active: 6132120 kB' 'Inactive: 3481212 kB' 'Active(anon): 5746076 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453004 kB' 'Mapped: 167236 kB' 'Shmem: 5296384 kB' 'KReclaimable: 166388 kB' 'Slab: 493964 kB' 'SReclaimable: 166388 kB' 'SUnreclaim: 327576 kB' 'KernelStack: 12768 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6859072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.309 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.310 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:03.311 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 28225800 kB' 'MemUsed: 4651140 kB' 'SwapCached: 0 kB' 'Active: 1524004 kB' 'Inactive: 188012 kB' 'Active(anon): 1395992 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561368 kB' 'Mapped: 110340 kB' 'AnonPages: 153772 kB' 'Shmem: 1245344 kB' 'KernelStack: 6856 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53060 kB' 'Slab: 226492 kB' 'SReclaimable: 53060 kB' 'SUnreclaim: 173432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.311 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.312 15:55:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:03.312 node0=1024 expecting 1024
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.312 15:55:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:04.695 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:04.695 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:04.695 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:04.695 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:04.695 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:04.695 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:04.695 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:04.695 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:04.695 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:04.695 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:04.695 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:04.695 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:04.695 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:04.695 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:04.695 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:04.695 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:04.695 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:04.695 INFO: Requested 512 hugepages but 1024 already allocated on node0
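Note on the trace above: the long runs of "IFS=': '" / "read -r var val _" / "continue" entries are setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested one (HugePages_Surp here, which resolves to 0 via the final "echo 0" / "return 0"), after which hugepages.sh tallies the per-node counts and prints "node0=1024 expecting 1024". A minimal sketch of that lookup pattern, assuming plain bash and the standard /proc and sysfs meminfo paths (an illustration of the pattern, not the SPDK script verbatim; variable names in the usage example are illustrative):

get_meminfo() {
    # Sketch only: look up one meminfo field, optionally from a NUMA node's file.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # The per-node file prefixes lines with "Node <N> "; strip that so the field
    # name is always the first token, then split on ':' and whitespace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example usage (illustrative): query the counters the trace above verifies.
echo "node0=$(get_meminfo HugePages_Total 0) hugepages, surplus $(get_meminfo HugePages_Surp 0)"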
15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47038008 kB' 'MemAvailable: 50494592 kB' 'Buffers: 2704 kB' 'Cached: 9161024 kB' 'SwapCached: 0 kB' 'Active: 6131908 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745864 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452612 kB' 'Mapped: 167396 kB' 'Shmem: 5296472 kB' 'KReclaimable: 166332 kB' 'Slab: 493740 kB' 'SReclaimable: 166332 kB' 'SUnreclaim: 327408 kB' 'KernelStack: 12672 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6857052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.695 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.696 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47037276 kB' 'MemAvailable: 50493860 kB' 'Buffers: 2704 kB' 'Cached: 9161028 kB' 'SwapCached: 0 kB' 'Active: 6131696 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745652 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452464 kB' 'Mapped: 167248 kB' 'Shmem: 5296476 kB' 'KReclaimable: 166332 kB' 'Slab: 493736 kB' 'SReclaimable: 166332 kB' 'SUnreclaim: 327404 kB' 'KernelStack: 12688 kB' 'PageTables: 7172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6857068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.697 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 
15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.698 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47040524 kB' 'MemAvailable: 50497108 kB' 'Buffers: 2704 kB' 'Cached: 9161032 kB' 'SwapCached: 0 kB' 'Active: 6131732 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745688 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452512 kB' 'Mapped: 167248 kB' 'Shmem: 5296480 kB' 'KReclaimable: 166332 kB' 'Slab: 493816 kB' 'SReclaimable: 166332 kB' 'SUnreclaim: 327484 kB' 'KernelStack: 12688 kB' 'PageTables: 7152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6863080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.699 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.700 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.701 nr_hugepages=1024 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.701 resv_hugepages=0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.701 surplus_hugepages=0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.701 anon_hugepages=0 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- 
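(Editor's note: the trace above is the field-matching loop inside setup/common.sh's get_meminfo. The meminfo snapshot is read one "key: value" pair at a time, every non-matching key is skipped with continue, and the value of the requested field (HugePages_Rsvd above, HugePages_Total next) is echoed. A condensed sketch of that lookup, reconstructed from the trace; the real SPDK implementation may differ in detail:)

    shopt -s extglob      # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # per-node lookups read the node's own meminfo file instead of the global one
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total     # -> 1024 in the run above
    get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo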
setup/common.sh@28 -- # mapfile -t mem 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 47040204 kB' 'MemAvailable: 50496788 kB' 'Buffers: 2704 kB' 'Cached: 9161068 kB' 'SwapCached: 0 kB' 'Active: 6131604 kB' 'Inactive: 3481212 kB' 'Active(anon): 5745560 kB' 'Inactive(anon): 0 kB' 'Active(file): 386044 kB' 'Inactive(file): 3481212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452364 kB' 'Mapped: 167248 kB' 'Shmem: 5296516 kB' 'KReclaimable: 166332 kB' 'Slab: 493808 kB' 'SReclaimable: 166332 kB' 'SUnreclaim: 327476 kB' 'KernelStack: 12688 kB' 'PageTables: 7160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 6856748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1338972 kB' 'DirectMap2M: 13260800 kB' 'DirectMap1G: 54525952 kB' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.701 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:04.702 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 
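(Editor's note: at this point the test has confirmed that the kernel reports exactly the requested 1024 hugepages and switches to per-node accounting. get_nodes walks /sys/devices/system/node/node* and records each node's hugepage count (1024 on node0, 0 on node1 in this run), then get_meminfo is re-run with node=0 so node0's own meminfo file is consulted. A sketch of that node walk, following the trace; nodes_sys is the array name used by hugepages.sh, while the hugepages-2048kB sysfs path is an assumption on my part:)

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # index by numeric node id; value = 2 MiB hugepages currently reserved on that node
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"   # this run: "nodes: 0 1 -> 1024 0"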
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 28228816 kB' 'MemUsed: 4648124 kB' 'SwapCached: 0 kB' 'Active: 1521732 kB' 'Inactive: 188012 kB' 'Active(anon): 1393720 kB' 'Inactive(anon): 0 kB' 'Active(file): 128012 kB' 'Inactive(file): 188012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1561376 kB' 'Mapped: 109916 kB' 'AnonPages: 151580 kB' 'Shmem: 1245352 kB' 'KernelStack: 6728 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53036 kB' 'Slab: 226504 kB' 'SReclaimable: 53036 kB' 'SUnreclaim: 173468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 
15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 
15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.703 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.704 node0=1024 expecting 1024 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.704 00:03:04.704 real 0m2.963s 00:03:04.704 user 0m1.233s 00:03:04.704 sys 0m1.656s 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.704 15:55:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:04.704 ************************************ 00:03:04.704 END TEST no_shrink_alloc 00:03:04.704 ************************************ 00:03:04.704 15:55:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
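(Editor's note: the no_shrink_alloc case ends by folding the per-node reserved and surplus counts into nodes_test and asserting that node0 still holds all 1024 pages, hence "node0=1024 expecting 1024" and the passing [[ 1024 == 1024 ]] check, i.e. the allocation was not shrunk. A compact restatement of that final assertion with the values observed in the trace; the array names come from hugepages.sh, the loop body is an illustrative paraphrase:)

    # observed in this run: 1024 pages on node0, 0 on node1, no reserved or surplus pages
    nodes_sys=([0]=1024 [1]=0)
    nodes_test=([0]=1024 [1]=0)
    resv=0 surp=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))   # reserved/surplus pages still belong to the node
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done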
00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.704 15:55:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.704 00:03:04.704 real 0m11.878s 00:03:04.704 user 0m4.597s 00:03:04.704 sys 0m6.149s 00:03:04.704 15:55:50 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.704 15:55:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.704 ************************************ 00:03:04.704 END TEST hugepages 00:03:04.704 ************************************ 00:03:04.704 15:55:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:04.704 15:55:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:04.704 15:55:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.704 15:55:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.704 15:55:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.704 ************************************ 00:03:04.704 START TEST driver 00:03:04.704 ************************************ 00:03:04.704 15:55:50 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:04.962 * Looking for test storage... 
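The clear_hp step above walks every NUMA node and echoes 0 once per hugepage size; xtrace does not show the redirection target, but the conventional sysfs knob is nr_hugepages. A hedged sketch of what that loop amounts to (needs root, sysfs layout assumed):

#!/usr/bin/env bash
# Sketch of the clear_hp loop from the trace: zero out every reserved hugepage
# pool on every node, then flag the environment so later stages re-allocate.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        [[ -w $hp/nr_hugepages ]] && echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes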
00:03:04.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.962 15:55:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:04.962 15:55:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.962 15:55:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.491 15:55:53 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:07.491 15:55:53 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.491 15:55:53 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.491 15:55:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:07.491 ************************************ 00:03:07.491 START TEST guess_driver 00:03:07.491 ************************************ 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:07.491 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:07.491 Looking for driver=vfio-pci 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.491 15:55:53 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.861 15:55:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.794 15:55:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.794 15:55:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.794 15:55:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.068 15:55:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:10.068 15:55:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:10.068 15:55:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.068 15:55:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.599 00:03:12.599 real 0m5.182s 00:03:12.599 user 0m1.103s 00:03:12.599 sys 0m1.925s 00:03:12.599 15:55:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.599 15:55:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:12.599 ************************************ 00:03:12.599 END TEST guess_driver 00:03:12.599 ************************************ 00:03:12.599 15:55:58 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:12.599 00:03:12.599 real 0m7.840s 00:03:12.599 user 0m1.682s 00:03:12.599 sys 0m2.970s 00:03:12.599 15:55:58 
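Condensed, the guess_driver run above boils down to: vfio is considered usable when /sys/kernel/iommu_groups is populated (141 groups here) and modprobe can resolve vfio_pci and its dependencies, so the test settles on vfio-pci and then re-reads the `setup.sh config` listing to confirm every device line reports that driver. A rough stand-alone equivalent; the setup.sh path is assumed and no fallback driver is attempted, since none appears in this portion of the log:

#!/usr/bin/env bash
shopt -s nullglob
# Sketch of the pick-and-verify logic traced above.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

driver=$(pick_driver)
echo "Looking for driver=$driver"

# Re-read the config listing and make sure each "... -> <driver>" line matches
# the driver we picked (field layout taken from the traced read call).
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' && $setup_driver != "$driver" ]] && fail=1
done < <(./scripts/setup.sh config)   # run from an spdk checkout; path assumed
(( fail == 0 )) && echo "all devices bound to $driver"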
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.599 15:55:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:12.599 ************************************ 00:03:12.599 END TEST driver 00:03:12.599 ************************************ 00:03:12.599 15:55:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:12.599 15:55:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:12.599 15:55:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.599 15:55:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.599 15:55:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.599 ************************************ 00:03:12.599 START TEST devices 00:03:12.599 ************************************ 00:03:12.599 15:55:58 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:12.599 * Looking for test storage... 00:03:12.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.857 15:55:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:12.857 15:55:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:12.857 15:55:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.857 15:55:58 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:14.238 
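Before mounting anything, devices.sh screens the candidate disks: zoned namespaces are skipped (the queue/zoned probe above) and, just below, the size read from sysfs is compared against min_disk_size=3221225472, which nvme0n1 passes at 1000204886016 bytes. A small sketch of that screening pass; the sector arithmetic and the PCI lookup path are assumptions based on the usual nvme sysfs layout, not shown verbatim in the trace:

#!/usr/bin/env bash
# Sketch of the device screening traced here: skip zoned namespaces and
# anything below the minimum size, remember the PCI address of what is left.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the log
declare -a blocks
declare -A blocks_to_pci

for dev in /sys/block/nvme*; do
    name=${dev##*/}
    # zoned namespaces are excluded from the mount tests
    [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
    size=$(( $(<"$dev/size") * 512 ))       # the size file counts 512-byte sectors
    (( size >= min_disk_size )) || continue
    blocks+=("$name")
    # block device -> nvme controller -> PCI function (layout assumed)
    blocks_to_pci[$name]=$(basename "$(readlink -f "$dev/device/device")")
done
echo "test disk candidates: ${blocks[*]}"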
15:56:00 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:14.238 No valid GPT data, bailing 00:03:14.238 15:56:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:14.238 15:56:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:14.238 15:56:00 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:14.238 15:56:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.238 15:56:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:14.238 ************************************ 00:03:14.238 START TEST nvme_mount 00:03:14.238 ************************************ 00:03:14.238 15:56:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:14.239 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:15.620 Creating new GPT entries in memory. 00:03:15.620 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:15.620 other utilities. 00:03:15.620 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:15.620 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.620 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:15.620 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:15.620 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:16.557 Creating new GPT entries in memory. 00:03:16.557 The operation has completed successfully. 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 650101 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:16.557 15:56:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.557 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.491 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.491 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.491 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.491 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:17.492 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:17.752 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.752 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.011 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:18.011 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:18.011 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:18.011 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.011 15:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:18.975 15:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:19.238 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.239 15:56:05 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.239 15:56:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:20.621 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:20.621 00:03:20.621 real 0m6.447s 00:03:20.621 user 0m1.529s 00:03:20.621 sys 0m2.471s 00:03:20.621 15:56:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.621 15:56:06 
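Put together, the nvme_mount run traced above is: wipe the disk's partition tables, create a single roughly 1 GiB partition, format it ext4, mount it under test/setup/nvme_mount, drop a dummy test file, verify the device shows up as an active mount, then unmount and wipefs both the partition and the raw disk (and repeat once against the whole disk). A condensed, destructive sketch of that sequence; device name, partition bounds and mount layout are taken from the log, the paths are shortened, and it should only ever be pointed at a scratch disk:

#!/usr/bin/env bash
set -e
# Condensed sketch of the nvme_mount sequence from the trace. DESTRUCTIVE: it wipes $disk.
disk=/dev/nvme0n1
part=${disk}p1
mnt=./nvme_mount

sgdisk "$disk" --zap-all                    # destroy any existing GPT/MBR
sgdisk "$disk" --new=1:2048:2099199         # one ~1 GiB partition, as in the log
udevadm settle                              # the real test syncs on uevents instead
mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
: > "$mnt/test_nvme"                        # the dummy test file

# teardown, mirroring cleanup_nvme
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"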
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:20.621 ************************************ 00:03:20.621 END TEST nvme_mount 00:03:20.621 ************************************ 00:03:20.881 15:56:06 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:20.881 15:56:06 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:20.881 15:56:06 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.881 15:56:06 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.881 15:56:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:20.881 ************************************ 00:03:20.881 START TEST dm_mount 00:03:20.881 ************************************ 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:20.881 15:56:06 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:21.821 Creating new GPT entries in memory. 00:03:21.821 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:21.821 other utilities. 00:03:21.821 15:56:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:21.821 15:56:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.821 15:56:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:21.821 15:56:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.821 15:56:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:22.759 Creating new GPT entries in memory. 00:03:22.759 The operation has completed successfully. 00:03:22.759 15:56:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:22.759 15:56:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.759 15:56:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.759 15:56:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.759 15:56:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:24.140 The operation has completed successfully. 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 652497 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.140 15:56:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.078 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:25.079 15:56:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:25.336 15:56:11 
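The dm_mount case repeats the same idea one layer up: two 1 GiB partitions are combined into a device-mapper target called nvme_dm_test, the /dev/mapper node resolves to dm-0, both partitions list dm-0 under their holders/ directories, and the mapped device is formatted, mounted and finally torn down with dmsetup remove --force plus wipefs. The table below is an assumption: the trace only shows `dmsetup create nvme_dm_test`, not the table it was fed, so a linear concatenation of the two partitions is used purely for illustration:

#!/usr/bin/env bash
set -e
# Hedged sketch of the dm_mount sequence. DESTRUCTIVE against $disk.
disk=/dev/nvme0n1
p1=${disk}p1
p2=${disk}p2
mnt=./dm_mount

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199         # first  ~1 GiB partition (log)
sgdisk "$disk" --new=2:2099200:4196351      # second ~1 GiB partition (log)
udevadm settle

# Concatenate both partitions into one linear dm device (table assumed).
size1=$(blockdev --getsz "$p1")
size2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $size1 linear $p1 0
$size1 $size2 linear $p2 0
EOF

dm=$(readlink -f /dev/mapper/nvme_dm_test)  # e.g. /dev/dm-0
ls "/sys/class/block/${p1##*/}/holders"     # should list ${dm##*/}

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$mnt" && mount /dev/mapper/nvme_dm_test "$mnt"

# teardown, mirroring cleanup_dm
umount "$mnt"
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2"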
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.336 15:56:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.271 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:26.530 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.530 15:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:26.789 00:03:26.789 real 0m5.869s 00:03:26.789 user 0m0.980s 00:03:26.789 sys 0m1.745s 00:03:26.789 15:56:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.789 15:56:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:26.789 ************************************ 00:03:26.789 END TEST dm_mount 00:03:26.789 ************************************ 00:03:26.789 15:56:12 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:26.789 15:56:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:26.789 15:56:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:26.790 15:56:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.790 15:56:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.790 15:56:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.790 15:56:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.790 15:56:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:27.049 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:27.049 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:27.049 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:27.049 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.049 15:56:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:27.049 00:03:27.049 real 0m14.302s 00:03:27.049 user 0m3.210s 00:03:27.049 sys 0m5.256s 00:03:27.049 15:56:12 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.049 15:56:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.049 ************************************ 00:03:27.049 END TEST devices 00:03:27.049 ************************************ 00:03:27.049 15:56:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:27.049 00:03:27.049 real 0m45.117s 00:03:27.049 user 0m13.000s 00:03:27.049 sys 0m19.931s 00:03:27.049 15:56:12 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.049 15:56:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.049 ************************************ 00:03:27.049 END TEST setup.sh 00:03:27.049 ************************************ 00:03:27.049 15:56:12 -- common/autotest_common.sh@1142 -- # return 0 00:03:27.049 15:56:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:28.423 Hugepages 00:03:28.423 node hugesize free / total 00:03:28.423 node0 1048576kB 0 / 0 00:03:28.423 node0 2048kB 2048 / 2048 00:03:28.423 node1 1048576kB 0 / 0 00:03:28.423 node1 2048kB 0 / 0 00:03:28.423 00:03:28.423 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.424 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:28.424 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:28.424 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:03:28.424 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:28.424 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:28.424 15:56:14 -- spdk/autotest.sh@130 -- # uname -s 00:03:28.424 15:56:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:28.424 15:56:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:28.424 15:56:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.802 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.802 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.802 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:30.737 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.737 15:56:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:32.113 15:56:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:32.113 15:56:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:32.113 15:56:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.113 15:56:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:32.113 15:56:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:32.113 15:56:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:32.113 15:56:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.113 15:56:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.113 15:56:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:32.113 15:56:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:32.113 15:56:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:32.113 15:56:17 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.047 Waiting for block devices as requested 00:03:33.047 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:33.307 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:33.307 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:33.307 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:33.566 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.566 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.566 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.566 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:33.826 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:03:33.826 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:34.084 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:34.084 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:34.084 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:34.084 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:34.344 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:34.344 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:34.344 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:34.603 15:56:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:34.603 15:56:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:03:34.603 15:56:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:34.603 15:56:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:34.603 15:56:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:34.603 15:56:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:34.603 15:56:20 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:34.603 15:56:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:34.603 15:56:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:34.603 15:56:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:34.603 15:56:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:34.603 15:56:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:34.603 15:56:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:34.603 15:56:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:34.603 15:56:20 -- common/autotest_common.sh@1557 -- # continue 00:03:34.603 15:56:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:34.603 15:56:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:34.603 15:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:34.603 15:56:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:34.603 15:56:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:34.603 15:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:34.603 15:56:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.978 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.978 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:35.978 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.978 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.916 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.916 15:56:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:36.916 15:56:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:36.916 15:56:22 -- common/autotest_common.sh@10 -- # set +x 00:03:37.174 15:56:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:37.174 15:56:22 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:37.174 15:56:22 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:37.174 15:56:22 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:37.174 15:56:22 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:37.174 15:56:22 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:37.174 15:56:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:37.174 15:56:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:37.174 15:56:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:37.174 15:56:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.174 15:56:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:37.174 15:56:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:37.174 15:56:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:37.174 15:56:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:37.174 15:56:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:37.174 15:56:23 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:37.174 15:56:23 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:37.174 15:56:23 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:37.174 15:56:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:03:37.174 15:56:23 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:03:37.174 15:56:23 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=657800 00:03:37.174 15:56:23 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.174 15:56:23 -- common/autotest_common.sh@1598 -- # waitforlisten 657800 00:03:37.174 15:56:23 -- common/autotest_common.sh@829 -- # '[' -z 657800 ']' 00:03:37.174 15:56:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.174 15:56:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:37.174 15:56:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.174 15:56:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:37.174 15:56:23 -- common/autotest_common.sh@10 -- # set +x 00:03:37.174 [2024-07-15 15:56:23.054343] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:03:37.174 [2024-07-15 15:56:23.054422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657800 ] 00:03:37.174 EAL: No free 2048 kB hugepages reported on node 1 00:03:37.174 [2024-07-15 15:56:23.109844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.435 [2024-07-15 15:56:23.216188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.695 15:56:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:37.695 15:56:23 -- common/autotest_common.sh@862 -- # return 0 00:03:37.695 15:56:23 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:37.695 15:56:23 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:37.695 15:56:23 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:40.984 nvme0n1 00:03:40.984 15:56:26 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.984 [2024-07-15 15:56:26.741887] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.984 [2024-07-15 15:56:26.741931] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.984 request: 00:03:40.984 { 00:03:40.984 "nvme_ctrlr_name": "nvme0", 00:03:40.984 "password": "test", 00:03:40.984 "method": "bdev_nvme_opal_revert", 00:03:40.984 "req_id": 1 00:03:40.984 } 00:03:40.984 Got JSON-RPC error response 00:03:40.984 response: 00:03:40.984 { 00:03:40.984 "code": -32603, 00:03:40.984 "message": "Internal error" 00:03:40.984 } 00:03:40.984 15:56:26 -- common/autotest_common.sh@1604 -- # true 00:03:40.984 15:56:26 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:40.984 15:56:26 -- common/autotest_common.sh@1608 -- # killprocess 657800 00:03:40.984 15:56:26 -- common/autotest_common.sh@948 -- # '[' -z 657800 ']' 00:03:40.984 15:56:26 -- common/autotest_common.sh@952 -- # kill -0 657800 00:03:40.984 15:56:26 -- common/autotest_common.sh@953 -- # uname 00:03:40.984 15:56:26 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.984 15:56:26 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 657800 00:03:40.984 15:56:26 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.984 15:56:26 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.984 15:56:26 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 657800' 00:03:40.984 killing process with pid 657800 00:03:40.984 15:56:26 -- common/autotest_common.sh@967 -- # kill 657800 00:03:40.984 15:56:26 -- common/autotest_common.sh@972 -- # wait 657800 00:03:42.893 15:56:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:42.893 15:56:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:42.893 15:56:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.893 15:56:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.893 15:56:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:42.893 15:56:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.893 15:56:28 -- common/autotest_common.sh@10 -- # set +x 00:03:42.893 15:56:28 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:42.893 15:56:28 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.893 15:56:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.893 15:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.893 15:56:28 -- common/autotest_common.sh@10 -- # set +x 00:03:42.893 ************************************ 00:03:42.893 START TEST env 00:03:42.893 ************************************ 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.893 * Looking for test storage... 00:03:42.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.893 15:56:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.893 15:56:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.893 ************************************ 00:03:42.893 START TEST env_memory 00:03:42.893 ************************************ 00:03:42.893 15:56:28 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.893 00:03:42.893 00:03:42.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.893 http://cunit.sourceforge.net/ 00:03:42.893 00:03:42.893 00:03:42.893 Suite: memory 00:03:42.893 Test: alloc and free memory map ...[2024-07-15 15:56:28.722468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.893 passed 00:03:42.893 Test: mem map translation ...[2024-07-15 15:56:28.743560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.893 [2024-07-15 15:56:28.743581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.893 [2024-07-15 15:56:28.743638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.893 [2024-07-15 15:56:28.743650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.893 passed 00:03:42.893 Test: mem map registration ...[2024-07-15 15:56:28.786906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.893 [2024-07-15 15:56:28.786925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.893 passed 00:03:42.893 Test: mem map adjacent registrations ...passed 00:03:42.893 00:03:42.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.893 suites 1 1 n/a 0 0 00:03:42.893 tests 4 4 4 0 0 00:03:42.893 asserts 152 152 152 0 n/a 00:03:42.893 00:03:42.893 Elapsed time = 0.145 seconds 00:03:42.893 00:03:42.893 real 0m0.153s 00:03:42.893 user 0m0.140s 00:03:42.893 sys 0m0.012s 00:03:42.893 15:56:28 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.893 15:56:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.893 ************************************ 00:03:42.893 END TEST env_memory 00:03:42.893 ************************************ 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1142 -- # return 0 00:03:42.893 15:56:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.893 15:56:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.893 15:56:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.893 ************************************ 00:03:42.893 START TEST env_vtophys 00:03:42.893 ************************************ 00:03:42.893 15:56:28 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:43.153 EAL: lib.eal log level changed from notice to debug 00:03:43.153 EAL: Detected lcore 0 as core 0 on socket 0 00:03:43.153 EAL: Detected lcore 1 as core 1 on socket 0 00:03:43.153 EAL: Detected lcore 2 as core 2 on socket 0 00:03:43.153 EAL: Detected lcore 3 as core 3 on socket 0 00:03:43.153 EAL: Detected lcore 4 as core 4 on socket 0 00:03:43.153 EAL: Detected lcore 5 as core 5 on socket 0 00:03:43.153 EAL: Detected lcore 6 as core 8 on socket 0 00:03:43.153 EAL: Detected lcore 7 as core 9 on socket 0 00:03:43.153 EAL: Detected lcore 8 as core 10 on socket 0 00:03:43.153 EAL: Detected lcore 9 as core 11 on socket 0 00:03:43.153 EAL: Detected lcore 10 as core 12 on socket 0 00:03:43.153 EAL: Detected lcore 11 as core 13 on socket 0 00:03:43.153 EAL: Detected lcore 12 as core 0 on socket 1 00:03:43.153 EAL: Detected lcore 13 as core 1 on socket 1 00:03:43.153 EAL: Detected lcore 14 as core 2 on socket 1 00:03:43.153 EAL: Detected lcore 15 as core 3 on socket 1 00:03:43.153 EAL: Detected lcore 16 as core 4 on socket 1 00:03:43.153 EAL: Detected lcore 17 as core 5 on socket 1 00:03:43.153 EAL: Detected lcore 18 as core 8 on socket 1 00:03:43.153 EAL: Detected lcore 19 as core 9 on socket 1 00:03:43.153 EAL: Detected lcore 20 as core 10 on socket 1 00:03:43.153 EAL: Detected lcore 21 as core 11 on socket 1 00:03:43.153 EAL: Detected lcore 22 as core 12 on socket 1 00:03:43.153 EAL: Detected lcore 23 as core 13 on socket 1 00:03:43.153 EAL: Detected lcore 24 as core 0 on socket 0 00:03:43.153 EAL: Detected lcore 25 as core 1 on socket 0 00:03:43.153 EAL: Detected lcore 26 as core 2 on socket 0 00:03:43.153 EAL: Detected lcore 27 as core 3 on socket 0 00:03:43.153 EAL: Detected lcore 28 as core 4 on socket 0 00:03:43.153 EAL: Detected lcore 29 as core 5 on socket 0 00:03:43.153 EAL: Detected lcore 30 as core 8 on socket 0 00:03:43.153 EAL: Detected lcore 31 as core 9 on socket 0 00:03:43.153 EAL: Detected lcore 32 as core 10 on socket 0 00:03:43.153 EAL: Detected lcore 33 as core 11 on socket 0 00:03:43.153 EAL: Detected lcore 34 as core 12 on socket 0 00:03:43.153 EAL: Detected lcore 35 as core 13 on socket 0 00:03:43.153 EAL: Detected lcore 36 as core 0 on socket 1 00:03:43.153 EAL: Detected lcore 37 as core 1 on socket 1 00:03:43.153 EAL: Detected lcore 38 as core 2 on socket 1 00:03:43.153 EAL: Detected lcore 39 as core 3 on socket 1 00:03:43.153 EAL: Detected lcore 40 as core 4 on socket 1 00:03:43.153 EAL: Detected lcore 41 as core 5 on socket 1 00:03:43.153 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:43.153 EAL: Detected lcore 43 as core 9 on socket 1 00:03:43.153 EAL: Detected lcore 44 as core 10 on socket 1 00:03:43.153 EAL: Detected lcore 45 as core 11 on socket 1 00:03:43.153 EAL: Detected lcore 46 as core 12 on socket 1 00:03:43.153 EAL: Detected lcore 47 as core 13 on socket 1 00:03:43.153 EAL: Maximum logical cores by configuration: 128 00:03:43.153 EAL: Detected CPU lcores: 48 00:03:43.153 EAL: Detected NUMA nodes: 2 00:03:43.153 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:43.153 EAL: Detected shared linkage of DPDK 00:03:43.153 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.153 EAL: Bus pci wants IOVA as 'DC' 00:03:43.153 EAL: Buses did not request a specific IOVA mode. 00:03:43.153 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:43.153 EAL: Selected IOVA mode 'VA' 00:03:43.153 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.153 EAL: Probing VFIO support... 00:03:43.153 EAL: IOMMU type 1 (Type 1) is supported 00:03:43.153 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:43.153 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:43.153 EAL: VFIO support initialized 00:03:43.153 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.153 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:43.153 EAL: Setting up physically contiguous memory... 00:03:43.153 EAL: Setting maximum number of open files to 524288 00:03:43.153 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:43.153 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:43.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:43.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:43.153 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:43.154 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:43.154 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.154 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:43.154 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.154 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.154 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:43.154 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:43.154 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.154 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:43.154 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.154 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.154 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:43.154 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:43.154 EAL: Hugepages will be freed exactly as allocated. 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: TSC frequency is ~2700000 KHz 00:03:43.154 EAL: Main lcore 0 is ready (tid=7fbca84a1a00;cpuset=[0]) 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 0 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.154 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.154 00:03:43.154 00:03:43.154 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.154 http://cunit.sourceforge.net/ 00:03:43.154 00:03:43.154 00:03:43.154 Suite: components_suite 00:03:43.154 Test: vtophys_malloc_test ...passed 00:03:43.154 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.154 EAL: Trying to obtain current memory policy. 
00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.154 EAL: Trying to obtain current memory policy. 
00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.414 EAL: Restoring previous memory policy: 4 00:03:43.414 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.414 EAL: request: mp_malloc_sync 00:03:43.414 EAL: No shared files mode enabled, IPC is disabled 00:03:43.414 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.414 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.414 EAL: request: mp_malloc_sync 00:03:43.414 EAL: No shared files mode enabled, IPC is disabled 00:03:43.414 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.414 EAL: Trying to obtain current memory policy. 00:03:43.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.673 EAL: Restoring previous memory policy: 4 00:03:43.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.673 EAL: request: mp_malloc_sync 00:03:43.673 EAL: No shared files mode enabled, IPC is disabled 00:03:43.673 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.673 EAL: request: mp_malloc_sync 00:03:43.673 EAL: No shared files mode enabled, IPC is disabled 00:03:43.673 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.673 EAL: Trying to obtain current memory policy. 
00:03:43.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.932 EAL: Restoring previous memory policy: 4 00:03:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.932 EAL: request: mp_malloc_sync 00:03:43.932 EAL: No shared files mode enabled, IPC is disabled 00:03:43.932 EAL: Heap on socket 0 was expanded by 1026MB 00:03:44.191 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.450 EAL: request: mp_malloc_sync 00:03:44.450 EAL: No shared files mode enabled, IPC is disabled 00:03:44.450 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.450 passed 00:03:44.450 00:03:44.450 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.450 suites 1 1 n/a 0 0 00:03:44.450 tests 2 2 2 0 0 00:03:44.450 asserts 497 497 497 0 n/a 00:03:44.450 00:03:44.450 Elapsed time = 1.311 seconds 00:03:44.450 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.450 EAL: request: mp_malloc_sync 00:03:44.450 EAL: No shared files mode enabled, IPC is disabled 00:03:44.450 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.450 EAL: No shared files mode enabled, IPC is disabled 00:03:44.450 EAL: No shared files mode enabled, IPC is disabled 00:03:44.450 EAL: No shared files mode enabled, IPC is disabled 00:03:44.450 00:03:44.450 real 0m1.421s 00:03:44.450 user 0m0.839s 00:03:44.450 sys 0m0.551s 00:03:44.450 15:56:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.450 15:56:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:44.450 ************************************ 00:03:44.450 END TEST env_vtophys 00:03:44.450 ************************************ 00:03:44.450 15:56:30 env -- common/autotest_common.sh@1142 -- # return 0 00:03:44.450 15:56:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.450 15:56:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.450 15:56:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.450 15:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.450 ************************************ 00:03:44.450 START TEST env_pci 00:03:44.450 ************************************ 00:03:44.450 15:56:30 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.450 00:03:44.450 00:03:44.450 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.450 http://cunit.sourceforge.net/ 00:03:44.450 00:03:44.450 00:03:44.450 Suite: pci 00:03:44.450 Test: pci_hook ...[2024-07-15 15:56:30.370315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 658687 has claimed it 00:03:44.450 EAL: Cannot find device (10000:00:01.0) 00:03:44.450 EAL: Failed to attach device on primary process 00:03:44.450 passed 00:03:44.450 00:03:44.450 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.450 suites 1 1 n/a 0 0 00:03:44.450 tests 1 1 1 0 0 00:03:44.450 asserts 25 25 25 0 n/a 00:03:44.450 00:03:44.450 Elapsed time = 0.021 seconds 00:03:44.450 00:03:44.450 real 0m0.034s 00:03:44.450 user 0m0.012s 00:03:44.450 sys 0m0.021s 00:03:44.450 15:56:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.450 15:56:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:44.450 ************************************ 00:03:44.450 END TEST env_pci 00:03:44.450 ************************************ 
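For readers following the trace, each of the tests above (env_memory, env_vtophys, env_pci) is launched through the same run_test wrapper from common/autotest_common.sh. The following is only a minimal sketch of what that wrapper appears to do, inferred from the START/END banners, the argument-count guard and the real/user/sys timing lines visible in this log; the actual helper is more involved:

    run_test() {
        local name=$1
        shift
        # mirrors the "'[' N -le 1 ']'" guard seen in the trace: a test name plus a command are required
        (($# >= 1)) || return 1
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # produces the real/user/sys lines printed after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

Invocation matches what the trace shows, e.g. run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut.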
00:03:44.450 15:56:30 env -- common/autotest_common.sh@1142 -- # return 0 00:03:44.450 15:56:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.450 15:56:30 env -- env/env.sh@15 -- # uname 00:03:44.450 15:56:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.450 15:56:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.450 15:56:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.450 15:56:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:44.450 15:56:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.450 15:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.450 ************************************ 00:03:44.450 START TEST env_dpdk_post_init 00:03:44.450 ************************************ 00:03:44.450 15:56:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.709 EAL: Detected CPU lcores: 48 00:03:44.709 EAL: Detected NUMA nodes: 2 00:03:44.709 EAL: Detected shared linkage of DPDK 00:03:44.709 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.709 EAL: Selected IOVA mode 'VA' 00:03:44.709 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.709 EAL: VFIO support initialized 00:03:44.709 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.709 EAL: Using IOMMU type 1 (Type 1) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:44.709 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:45.645 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:45.645 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:48.936 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:48.936 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:48.936 Starting DPDK initialization... 00:03:48.936 Starting SPDK post initialization... 00:03:48.936 SPDK NVMe probe 00:03:48.936 Attaching to 0000:0b:00.0 00:03:48.936 Attached to 0000:0b:00.0 00:03:48.936 Cleaning up... 
00:03:48.936 00:03:48.936 real 0m4.360s 00:03:48.936 user 0m3.219s 00:03:48.936 sys 0m0.198s 00:03:48.936 15:56:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.936 15:56:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.936 ************************************ 00:03:48.936 END TEST env_dpdk_post_init 00:03:48.936 ************************************ 00:03:48.936 15:56:34 env -- common/autotest_common.sh@1142 -- # return 0 00:03:48.936 15:56:34 env -- env/env.sh@26 -- # uname 00:03:48.936 15:56:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.936 15:56:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.936 15:56:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.936 15:56:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.936 15:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.936 ************************************ 00:03:48.936 START TEST env_mem_callbacks 00:03:48.936 ************************************ 00:03:48.936 15:56:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.936 EAL: Detected CPU lcores: 48 00:03:48.936 EAL: Detected NUMA nodes: 2 00:03:48.936 EAL: Detected shared linkage of DPDK 00:03:48.936 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.936 EAL: Selected IOVA mode 'VA' 00:03:48.936 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.936 EAL: VFIO support initialized 00:03:48.936 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.936 00:03:48.936 00:03:48.936 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.936 http://cunit.sourceforge.net/ 00:03:48.936 00:03:48.936 00:03:48.936 Suite: memory 00:03:48.936 Test: test ... 
00:03:48.936 register 0x200000200000 2097152 00:03:48.936 malloc 3145728 00:03:48.936 register 0x200000400000 4194304 00:03:48.936 buf 0x200000500000 len 3145728 PASSED 00:03:48.936 malloc 64 00:03:48.936 buf 0x2000004fff40 len 64 PASSED 00:03:48.936 malloc 4194304 00:03:48.936 register 0x200000800000 6291456 00:03:48.936 buf 0x200000a00000 len 4194304 PASSED 00:03:48.936 free 0x200000500000 3145728 00:03:48.936 free 0x2000004fff40 64 00:03:48.936 unregister 0x200000400000 4194304 PASSED 00:03:48.936 free 0x200000a00000 4194304 00:03:48.936 unregister 0x200000800000 6291456 PASSED 00:03:48.936 malloc 8388608 00:03:48.936 register 0x200000400000 10485760 00:03:48.936 buf 0x200000600000 len 8388608 PASSED 00:03:48.936 free 0x200000600000 8388608 00:03:48.936 unregister 0x200000400000 10485760 PASSED 00:03:48.936 passed 00:03:48.936 00:03:48.936 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.936 suites 1 1 n/a 0 0 00:03:48.936 tests 1 1 1 0 0 00:03:48.936 asserts 15 15 15 0 n/a 00:03:48.936 00:03:48.936 Elapsed time = 0.005 seconds 00:03:48.936 00:03:48.936 real 0m0.049s 00:03:48.936 user 0m0.013s 00:03:48.936 sys 0m0.035s 00:03:48.936 15:56:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.937 15:56:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.937 ************************************ 00:03:48.937 END TEST env_mem_callbacks 00:03:48.937 ************************************ 00:03:48.937 15:56:34 env -- common/autotest_common.sh@1142 -- # return 0 00:03:48.937 00:03:48.937 real 0m6.315s 00:03:48.937 user 0m4.342s 00:03:48.937 sys 0m1.015s 00:03:48.937 15:56:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.937 15:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.937 ************************************ 00:03:48.937 END TEST env 00:03:48.937 ************************************ 00:03:49.194 15:56:34 -- common/autotest_common.sh@1142 -- # return 0 00:03:49.194 15:56:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.194 15:56:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.194 15:56:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.194 15:56:34 -- common/autotest_common.sh@10 -- # set +x 00:03:49.194 ************************************ 00:03:49.194 START TEST rpc 00:03:49.194 ************************************ 00:03:49.194 15:56:34 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.194 * Looking for test storage... 00:03:49.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.194 15:56:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=659348 00:03:49.194 15:56:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:49.194 15:56:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.194 15:56:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 659348 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 659348 ']' 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:49.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:49.194 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.194 [2024-07-15 15:56:35.073428] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:03:49.194 [2024-07-15 15:56:35.073533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659348 ] 00:03:49.194 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.194 [2024-07-15 15:56:35.131378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.452 [2024-07-15 15:56:35.239216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:49.452 [2024-07-15 15:56:35.239288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 659348' to capture a snapshot of events at runtime. 00:03:49.452 [2024-07-15 15:56:35.239303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:49.452 [2024-07-15 15:56:35.239315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:49.452 [2024-07-15 15:56:35.239324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid659348 for offline analysis/debug. 00:03:49.452 [2024-07-15 15:56:35.239365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.711 15:56:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:49.711 15:56:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:49.711 15:56:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.711 15:56:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.711 15:56:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.711 15:56:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.711 15:56:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.711 15:56:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.711 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.711 ************************************ 00:03:49.711 START TEST rpc_integrity 00:03:49.712 ************************************ 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.712 { 00:03:49.712 "name": "Malloc0", 00:03:49.712 "aliases": [ 00:03:49.712 "1e55ea2b-102b-4a78-9dcf-f1c0d9fcdcdb" 00:03:49.712 ], 00:03:49.712 "product_name": "Malloc disk", 00:03:49.712 "block_size": 512, 00:03:49.712 "num_blocks": 16384, 00:03:49.712 "uuid": "1e55ea2b-102b-4a78-9dcf-f1c0d9fcdcdb", 00:03:49.712 "assigned_rate_limits": { 00:03:49.712 "rw_ios_per_sec": 0, 00:03:49.712 "rw_mbytes_per_sec": 0, 00:03:49.712 "r_mbytes_per_sec": 0, 00:03:49.712 "w_mbytes_per_sec": 0 00:03:49.712 }, 00:03:49.712 "claimed": false, 00:03:49.712 "zoned": false, 00:03:49.712 "supported_io_types": { 00:03:49.712 "read": true, 00:03:49.712 "write": true, 00:03:49.712 "unmap": true, 00:03:49.712 "flush": true, 00:03:49.712 "reset": true, 00:03:49.712 "nvme_admin": false, 00:03:49.712 "nvme_io": false, 00:03:49.712 "nvme_io_md": false, 00:03:49.712 "write_zeroes": true, 00:03:49.712 "zcopy": true, 00:03:49.712 "get_zone_info": false, 00:03:49.712 "zone_management": false, 00:03:49.712 "zone_append": false, 00:03:49.712 "compare": false, 00:03:49.712 "compare_and_write": false, 00:03:49.712 "abort": true, 00:03:49.712 "seek_hole": false, 00:03:49.712 "seek_data": false, 00:03:49.712 "copy": true, 00:03:49.712 "nvme_iov_md": false 00:03:49.712 }, 00:03:49.712 "memory_domains": [ 00:03:49.712 { 00:03:49.712 "dma_device_id": "system", 00:03:49.712 "dma_device_type": 1 00:03:49.712 }, 00:03:49.712 { 00:03:49.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.712 "dma_device_type": 2 00:03:49.712 } 00:03:49.712 ], 00:03:49.712 "driver_specific": {} 00:03:49.712 } 00:03:49.712 ]' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 [2024-07-15 15:56:35.603088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.712 [2024-07-15 15:56:35.603126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.712 [2024-07-15 15:56:35.603148] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3ad50 00:03:49.712 [2024-07-15 15:56:35.603162] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.712 
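The rpc_integrity case being exercised here drives the malloc and passthru bdev RPCs end to end: create Malloc0, layer Passthru0 on top of it, confirm both are reported, then delete them in reverse order and check that the bdev list is empty again. A minimal hand-driven sketch of the same sequence, assuming a running spdk_tgt and the stock scripts/rpc.py client (the rpc_cmd helper in the xtrace is a thin wrapper around it):

    # 16384 blocks x 512 bytes = 8 MiB malloc bdev, matching the dump above
    ./scripts/rpc.py bdev_malloc_create 8 512              # prints the new name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 0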
[2024-07-15 15:56:35.604473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.712 [2024-07-15 15:56:35.604494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.712 Passthru0 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.712 { 00:03:49.712 "name": "Malloc0", 00:03:49.712 "aliases": [ 00:03:49.712 "1e55ea2b-102b-4a78-9dcf-f1c0d9fcdcdb" 00:03:49.712 ], 00:03:49.712 "product_name": "Malloc disk", 00:03:49.712 "block_size": 512, 00:03:49.712 "num_blocks": 16384, 00:03:49.712 "uuid": "1e55ea2b-102b-4a78-9dcf-f1c0d9fcdcdb", 00:03:49.712 "assigned_rate_limits": { 00:03:49.712 "rw_ios_per_sec": 0, 00:03:49.712 "rw_mbytes_per_sec": 0, 00:03:49.712 "r_mbytes_per_sec": 0, 00:03:49.712 "w_mbytes_per_sec": 0 00:03:49.712 }, 00:03:49.712 "claimed": true, 00:03:49.712 "claim_type": "exclusive_write", 00:03:49.712 "zoned": false, 00:03:49.712 "supported_io_types": { 00:03:49.712 "read": true, 00:03:49.712 "write": true, 00:03:49.712 "unmap": true, 00:03:49.712 "flush": true, 00:03:49.712 "reset": true, 00:03:49.712 "nvme_admin": false, 00:03:49.712 "nvme_io": false, 00:03:49.712 "nvme_io_md": false, 00:03:49.712 "write_zeroes": true, 00:03:49.712 "zcopy": true, 00:03:49.712 "get_zone_info": false, 00:03:49.712 "zone_management": false, 00:03:49.712 "zone_append": false, 00:03:49.712 "compare": false, 00:03:49.712 "compare_and_write": false, 00:03:49.712 "abort": true, 00:03:49.712 "seek_hole": false, 00:03:49.712 "seek_data": false, 00:03:49.712 "copy": true, 00:03:49.712 "nvme_iov_md": false 00:03:49.712 }, 00:03:49.712 "memory_domains": [ 00:03:49.712 { 00:03:49.712 "dma_device_id": "system", 00:03:49.712 "dma_device_type": 1 00:03:49.712 }, 00:03:49.712 { 00:03:49.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.712 "dma_device_type": 2 00:03:49.712 } 00:03:49.712 ], 00:03:49.712 "driver_specific": {} 00:03:49.712 }, 00:03:49.712 { 00:03:49.712 "name": "Passthru0", 00:03:49.712 "aliases": [ 00:03:49.712 "a191823a-7d27-5b5f-9ea4-4428ef0d69e1" 00:03:49.712 ], 00:03:49.712 "product_name": "passthru", 00:03:49.712 "block_size": 512, 00:03:49.712 "num_blocks": 16384, 00:03:49.712 "uuid": "a191823a-7d27-5b5f-9ea4-4428ef0d69e1", 00:03:49.712 "assigned_rate_limits": { 00:03:49.712 "rw_ios_per_sec": 0, 00:03:49.712 "rw_mbytes_per_sec": 0, 00:03:49.712 "r_mbytes_per_sec": 0, 00:03:49.712 "w_mbytes_per_sec": 0 00:03:49.712 }, 00:03:49.712 "claimed": false, 00:03:49.712 "zoned": false, 00:03:49.712 "supported_io_types": { 00:03:49.712 "read": true, 00:03:49.712 "write": true, 00:03:49.712 "unmap": true, 00:03:49.712 "flush": true, 00:03:49.712 "reset": true, 00:03:49.712 "nvme_admin": false, 00:03:49.712 "nvme_io": false, 00:03:49.712 "nvme_io_md": false, 00:03:49.712 "write_zeroes": true, 00:03:49.712 "zcopy": true, 00:03:49.712 "get_zone_info": false, 00:03:49.712 "zone_management": false, 00:03:49.712 "zone_append": false, 00:03:49.712 "compare": false, 00:03:49.712 "compare_and_write": false, 00:03:49.712 "abort": true, 00:03:49.712 "seek_hole": false, 
00:03:49.712 "seek_data": false, 00:03:49.712 "copy": true, 00:03:49.712 "nvme_iov_md": false 00:03:49.712 }, 00:03:49.712 "memory_domains": [ 00:03:49.712 { 00:03:49.712 "dma_device_id": "system", 00:03:49.712 "dma_device_type": 1 00:03:49.712 }, 00:03:49.712 { 00:03:49.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.712 "dma_device_type": 2 00:03:49.712 } 00:03:49.712 ], 00:03:49.712 "driver_specific": { 00:03:49.712 "passthru": { 00:03:49.712 "name": "Passthru0", 00:03:49.712 "base_bdev_name": "Malloc0" 00:03:49.712 } 00:03:49.712 } 00:03:49.712 } 00:03:49.712 ]' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.712 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.712 00:03:49.712 real 0m0.205s 00:03:49.712 user 0m0.131s 00:03:49.712 sys 0m0.022s 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.712 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.712 ************************************ 00:03:49.712 END TEST rpc_integrity 00:03:49.712 ************************************ 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.971 15:56:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 ************************************ 00:03:49.971 START TEST rpc_plugins 00:03:49.971 ************************************ 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.971 { 00:03:49.971 "name": "Malloc1", 00:03:49.971 "aliases": [ 00:03:49.971 "89fe28e9-1dd0-4ee2-b6b3-0817dcf370c0" 00:03:49.971 ], 00:03:49.971 "product_name": "Malloc disk", 00:03:49.971 "block_size": 4096, 00:03:49.971 "num_blocks": 256, 00:03:49.971 "uuid": "89fe28e9-1dd0-4ee2-b6b3-0817dcf370c0", 00:03:49.971 "assigned_rate_limits": { 00:03:49.971 "rw_ios_per_sec": 0, 00:03:49.971 "rw_mbytes_per_sec": 0, 00:03:49.971 "r_mbytes_per_sec": 0, 00:03:49.971 "w_mbytes_per_sec": 0 00:03:49.971 }, 00:03:49.971 "claimed": false, 00:03:49.971 "zoned": false, 00:03:49.971 "supported_io_types": { 00:03:49.971 "read": true, 00:03:49.971 "write": true, 00:03:49.971 "unmap": true, 00:03:49.971 "flush": true, 00:03:49.971 "reset": true, 00:03:49.971 "nvme_admin": false, 00:03:49.971 "nvme_io": false, 00:03:49.971 "nvme_io_md": false, 00:03:49.971 "write_zeroes": true, 00:03:49.971 "zcopy": true, 00:03:49.971 "get_zone_info": false, 00:03:49.971 "zone_management": false, 00:03:49.971 "zone_append": false, 00:03:49.971 "compare": false, 00:03:49.971 "compare_and_write": false, 00:03:49.971 "abort": true, 00:03:49.971 "seek_hole": false, 00:03:49.971 "seek_data": false, 00:03:49.971 "copy": true, 00:03:49.971 "nvme_iov_md": false 00:03:49.971 }, 00:03:49.971 "memory_domains": [ 00:03:49.971 { 00:03:49.971 "dma_device_id": "system", 00:03:49.971 "dma_device_type": 1 00:03:49.971 }, 00:03:49.971 { 00:03:49.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.971 "dma_device_type": 2 00:03:49.971 } 00:03:49.971 ], 00:03:49.971 "driver_specific": {} 00:03:49.971 } 00:03:49.971 ]' 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.971 15:56:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.971 00:03:49.971 real 0m0.104s 00:03:49.971 user 0m0.066s 00:03:49.971 sys 0m0.010s 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 ************************************ 00:03:49.971 END TEST rpc_plugins 00:03:49.971 ************************************ 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.971 15:56:35 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.971 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 ************************************ 00:03:49.971 START TEST rpc_trace_cmd_test 00:03:49.971 ************************************ 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.971 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.971 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid659348", 00:03:49.971 "tpoint_group_mask": "0x8", 00:03:49.971 "iscsi_conn": { 00:03:49.971 "mask": "0x2", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "scsi": { 00:03:49.971 "mask": "0x4", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "bdev": { 00:03:49.971 "mask": "0x8", 00:03:49.971 "tpoint_mask": "0xffffffffffffffff" 00:03:49.971 }, 00:03:49.971 "nvmf_rdma": { 00:03:49.971 "mask": "0x10", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "nvmf_tcp": { 00:03:49.971 "mask": "0x20", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "ftl": { 00:03:49.971 "mask": "0x40", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "blobfs": { 00:03:49.971 "mask": "0x80", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "dsa": { 00:03:49.971 "mask": "0x200", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "thread": { 00:03:49.971 "mask": "0x400", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.971 "nvme_pcie": { 00:03:49.971 "mask": "0x800", 00:03:49.971 "tpoint_mask": "0x0" 00:03:49.971 }, 00:03:49.972 "iaa": { 00:03:49.972 "mask": "0x1000", 00:03:49.972 "tpoint_mask": "0x0" 00:03:49.972 }, 00:03:49.972 "nvme_tcp": { 00:03:49.972 "mask": "0x2000", 00:03:49.972 "tpoint_mask": "0x0" 00:03:49.972 }, 00:03:49.972 "bdev_nvme": { 00:03:49.972 "mask": "0x4000", 00:03:49.972 "tpoint_mask": "0x0" 00:03:49.972 }, 00:03:49.972 "sock": { 00:03:49.972 "mask": "0x8000", 00:03:49.972 "tpoint_mask": "0x0" 00:03:49.972 } 00:03:49.972 }' 00:03:49.972 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:49.972 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:49.972 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:50.230 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:50.230 15:56:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
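The checks above confirm that starting spdk_tgt with '-e bdev' set tpoint_group_mask to 0x8 (the bdev group) and expanded that group's tpoint_mask to all ones. A short sketch of how the same state can be inspected by hand, assuming the stock scripts/rpc.py client; the offline decode command is the one the target itself printed in its start-up notice:

    ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # expect "0x8"
    ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # expect 0xffffffffffffffff
    # the shared-memory trace file reported by trace_get_info can then be decoded offline:
    # spdk_trace -s spdk_tgt -p <target pid>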
00:03:50.230 00:03:50.230 real 0m0.188s 00:03:50.230 user 0m0.167s 00:03:50.230 sys 0m0.010s 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.230 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 ************************************ 00:03:50.230 END TEST rpc_trace_cmd_test 00:03:50.230 ************************************ 00:03:50.230 15:56:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:50.230 15:56:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:50.230 15:56:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:50.230 15:56:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:50.230 15:56:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.230 15:56:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.230 15:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 ************************************ 00:03:50.230 START TEST rpc_daemon_integrity 00:03:50.230 ************************************ 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.230 { 00:03:50.230 "name": "Malloc2", 00:03:50.230 "aliases": [ 00:03:50.230 "3196a571-aec6-4e81-b6e7-6db4201ce077" 00:03:50.230 ], 00:03:50.230 "product_name": "Malloc disk", 00:03:50.230 "block_size": 512, 00:03:50.230 "num_blocks": 16384, 00:03:50.230 "uuid": "3196a571-aec6-4e81-b6e7-6db4201ce077", 00:03:50.230 "assigned_rate_limits": { 00:03:50.230 "rw_ios_per_sec": 0, 00:03:50.230 "rw_mbytes_per_sec": 0, 00:03:50.230 "r_mbytes_per_sec": 0, 00:03:50.230 "w_mbytes_per_sec": 0 00:03:50.230 }, 00:03:50.230 "claimed": false, 00:03:50.230 "zoned": false, 00:03:50.230 "supported_io_types": { 00:03:50.230 "read": true, 00:03:50.230 "write": true, 00:03:50.230 "unmap": true, 00:03:50.230 "flush": true, 00:03:50.230 "reset": true, 00:03:50.230 "nvme_admin": false, 00:03:50.230 "nvme_io": false, 
00:03:50.230 "nvme_io_md": false, 00:03:50.230 "write_zeroes": true, 00:03:50.230 "zcopy": true, 00:03:50.230 "get_zone_info": false, 00:03:50.230 "zone_management": false, 00:03:50.230 "zone_append": false, 00:03:50.230 "compare": false, 00:03:50.230 "compare_and_write": false, 00:03:50.230 "abort": true, 00:03:50.230 "seek_hole": false, 00:03:50.230 "seek_data": false, 00:03:50.230 "copy": true, 00:03:50.230 "nvme_iov_md": false 00:03:50.230 }, 00:03:50.230 "memory_domains": [ 00:03:50.230 { 00:03:50.230 "dma_device_id": "system", 00:03:50.230 "dma_device_type": 1 00:03:50.230 }, 00:03:50.230 { 00:03:50.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.230 "dma_device_type": 2 00:03:50.230 } 00:03:50.230 ], 00:03:50.230 "driver_specific": {} 00:03:50.230 } 00:03:50.230 ]' 00:03:50.230 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 [2024-07-15 15:56:36.236875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.488 [2024-07-15 15:56:36.236912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.488 [2024-07-15 15:56:36.236932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3bc00 00:03:50.488 [2024-07-15 15:56:36.236969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.488 [2024-07-15 15:56:36.238159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.488 [2024-07-15 15:56:36.238184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.488 Passthru0 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.488 { 00:03:50.488 "name": "Malloc2", 00:03:50.488 "aliases": [ 00:03:50.488 "3196a571-aec6-4e81-b6e7-6db4201ce077" 00:03:50.488 ], 00:03:50.488 "product_name": "Malloc disk", 00:03:50.488 "block_size": 512, 00:03:50.488 "num_blocks": 16384, 00:03:50.488 "uuid": "3196a571-aec6-4e81-b6e7-6db4201ce077", 00:03:50.488 "assigned_rate_limits": { 00:03:50.488 "rw_ios_per_sec": 0, 00:03:50.488 "rw_mbytes_per_sec": 0, 00:03:50.488 "r_mbytes_per_sec": 0, 00:03:50.488 "w_mbytes_per_sec": 0 00:03:50.488 }, 00:03:50.488 "claimed": true, 00:03:50.488 "claim_type": "exclusive_write", 00:03:50.488 "zoned": false, 00:03:50.488 "supported_io_types": { 00:03:50.488 "read": true, 00:03:50.488 "write": true, 00:03:50.488 "unmap": true, 00:03:50.488 "flush": true, 00:03:50.488 "reset": true, 00:03:50.488 "nvme_admin": false, 00:03:50.488 "nvme_io": false, 00:03:50.488 "nvme_io_md": false, 00:03:50.488 "write_zeroes": true, 00:03:50.488 "zcopy": true, 00:03:50.488 "get_zone_info": 
false, 00:03:50.488 "zone_management": false, 00:03:50.488 "zone_append": false, 00:03:50.488 "compare": false, 00:03:50.488 "compare_and_write": false, 00:03:50.488 "abort": true, 00:03:50.488 "seek_hole": false, 00:03:50.488 "seek_data": false, 00:03:50.488 "copy": true, 00:03:50.488 "nvme_iov_md": false 00:03:50.488 }, 00:03:50.488 "memory_domains": [ 00:03:50.488 { 00:03:50.488 "dma_device_id": "system", 00:03:50.488 "dma_device_type": 1 00:03:50.488 }, 00:03:50.488 { 00:03:50.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.488 "dma_device_type": 2 00:03:50.488 } 00:03:50.488 ], 00:03:50.488 "driver_specific": {} 00:03:50.488 }, 00:03:50.488 { 00:03:50.488 "name": "Passthru0", 00:03:50.488 "aliases": [ 00:03:50.488 "372ee013-5806-547e-9011-a214c0352c10" 00:03:50.488 ], 00:03:50.488 "product_name": "passthru", 00:03:50.488 "block_size": 512, 00:03:50.488 "num_blocks": 16384, 00:03:50.488 "uuid": "372ee013-5806-547e-9011-a214c0352c10", 00:03:50.488 "assigned_rate_limits": { 00:03:50.488 "rw_ios_per_sec": 0, 00:03:50.488 "rw_mbytes_per_sec": 0, 00:03:50.488 "r_mbytes_per_sec": 0, 00:03:50.488 "w_mbytes_per_sec": 0 00:03:50.488 }, 00:03:50.488 "claimed": false, 00:03:50.488 "zoned": false, 00:03:50.488 "supported_io_types": { 00:03:50.488 "read": true, 00:03:50.488 "write": true, 00:03:50.488 "unmap": true, 00:03:50.488 "flush": true, 00:03:50.488 "reset": true, 00:03:50.488 "nvme_admin": false, 00:03:50.488 "nvme_io": false, 00:03:50.488 "nvme_io_md": false, 00:03:50.488 "write_zeroes": true, 00:03:50.488 "zcopy": true, 00:03:50.488 "get_zone_info": false, 00:03:50.488 "zone_management": false, 00:03:50.488 "zone_append": false, 00:03:50.488 "compare": false, 00:03:50.488 "compare_and_write": false, 00:03:50.488 "abort": true, 00:03:50.488 "seek_hole": false, 00:03:50.488 "seek_data": false, 00:03:50.488 "copy": true, 00:03:50.488 "nvme_iov_md": false 00:03:50.488 }, 00:03:50.488 "memory_domains": [ 00:03:50.488 { 00:03:50.488 "dma_device_id": "system", 00:03:50.488 "dma_device_type": 1 00:03:50.488 }, 00:03:50.488 { 00:03:50.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.488 "dma_device_type": 2 00:03:50.488 } 00:03:50.488 ], 00:03:50.488 "driver_specific": { 00:03:50.488 "passthru": { 00:03:50.488 "name": "Passthru0", 00:03:50.488 "base_bdev_name": "Malloc2" 00:03:50.488 } 00:03:50.488 } 00:03:50.488 } 00:03:50.488 ]' 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:50.488 15:56:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.488 00:03:50.488 real 0m0.210s 00:03:50.488 user 0m0.134s 00:03:50.488 sys 0m0.024s 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.488 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.488 ************************************ 00:03:50.488 END TEST rpc_daemon_integrity 00:03:50.488 ************************************ 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:50.488 15:56:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.488 15:56:36 rpc -- rpc/rpc.sh@84 -- # killprocess 659348 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@948 -- # '[' -z 659348 ']' 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@952 -- # kill -0 659348 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@953 -- # uname 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 659348 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 659348' 00:03:50.488 killing process with pid 659348 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@967 -- # kill 659348 00:03:50.488 15:56:36 rpc -- common/autotest_common.sh@972 -- # wait 659348 00:03:51.071 00:03:51.071 real 0m1.842s 00:03:51.071 user 0m2.294s 00:03:51.071 sys 0m0.542s 00:03:51.071 15:56:36 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.071 15:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.071 ************************************ 00:03:51.071 END TEST rpc 00:03:51.071 ************************************ 00:03:51.071 15:56:36 -- common/autotest_common.sh@1142 -- # return 0 00:03:51.071 15:56:36 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.071 15:56:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.071 15:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.071 15:56:36 -- common/autotest_common.sh@10 -- # set +x 00:03:51.071 ************************************ 00:03:51.071 START TEST skip_rpc 00:03:51.071 ************************************ 00:03:51.071 15:56:36 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.071 * Looking for test storage... 
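Between suites the harness tears the target down with the killprocess pattern visible in the xtrace above: verify the pid still belongs to an SPDK reactor, signal it, then reap it so the RPC socket and resources are free for the next test. A rough sketch of that behaviour (not the exact helper source):

    kill -0 "$spdk_pid"                        # still alive?
    ps --no-headers -o comm= "$spdk_pid"       # expect reactor_0 (refuse to kill if it resolves to sudo)
    kill "$spdk_pid"
    wait "$spdk_pid"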
00:03:51.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.071 15:56:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.071 15:56:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.071 15:56:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:51.071 15:56:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.071 15:56:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.071 15:56:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.071 ************************************ 00:03:51.071 START TEST skip_rpc 00:03:51.071 ************************************ 00:03:51.071 15:56:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:51.071 15:56:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=659783 00:03:51.071 15:56:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.071 15:56:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:51.071 15:56:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:51.071 [2024-07-15 15:56:36.994869] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:03:51.071 [2024-07-15 15:56:36.994946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659783 ] 00:03:51.071 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.071 [2024-07-15 15:56:37.050532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.327 [2024-07-15 15:56:37.151344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 659783 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 659783 ']' 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 659783 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 659783 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 659783' 00:03:56.601 killing process with pid 659783 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 659783 00:03:56.601 15:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 659783 00:03:56.601 00:03:56.601 real 0m5.457s 00:03:56.601 user 0m5.173s 00:03:56.601 sys 0m0.285s 00:03:56.601 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.601 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.601 ************************************ 00:03:56.601 END TEST skip_rpc 00:03:56.601 ************************************ 00:03:56.601 15:56:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:56.601 15:56:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.601 15:56:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.601 15:56:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.601 15:56:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.601 ************************************ 00:03:56.601 START TEST skip_rpc_with_json 00:03:56.601 ************************************ 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=660470 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 660470 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 660470 ']' 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
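The skip_rpc_with_json case starting here builds state over RPC, snapshots it, and then proves a fresh target can be brought up from the snapshot alone. A condensed sketch of the flow seen in the following entries, with paths shortened and the helper wrappers omitted:

    ./scripts/rpc.py nvmf_create_transport -t tcp      # add state worth saving
    ./scripts/rpc.py save_config > config.json         # dump the running config
    # replay it into a new target that has no RPC server at all
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
    sleep 5; kill $!; wait
    grep -q 'TCP Transport Init' log.txt               # the transport came back from the JSON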
00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:56.601 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.601 [2024-07-15 15:56:42.502988] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:03:56.601 [2024-07-15 15:56:42.503095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660470 ] 00:03:56.601 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.601 [2024-07-15 15:56:42.560530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.861 [2024-07-15 15:56:42.671017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 [2024-07-15 15:56:42.914021] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:57.120 request: 00:03:57.120 { 00:03:57.120 "trtype": "tcp", 00:03:57.120 "method": "nvmf_get_transports", 00:03:57.120 "req_id": 1 00:03:57.120 } 00:03:57.120 Got JSON-RPC error response 00:03:57.120 response: 00:03:57.120 { 00:03:57.120 "code": -19, 00:03:57.120 "message": "No such device" 00:03:57.120 } 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 [2024-07-15 15:56:42.922125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.120 15:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.120 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.120 { 00:03:57.120 "subsystems": [ 00:03:57.120 { 00:03:57.120 "subsystem": "vfio_user_target", 00:03:57.120 "config": null 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "keyring", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "iobuf", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "iobuf_set_options", 00:03:57.120 "params": { 00:03:57.120 "small_pool_count": 8192, 00:03:57.120 "large_pool_count": 1024, 00:03:57.120 "small_bufsize": 8192, 00:03:57.120 "large_bufsize": 
135168 00:03:57.120 } 00:03:57.120 } 00:03:57.120 ] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "sock", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "sock_set_default_impl", 00:03:57.120 "params": { 00:03:57.120 "impl_name": "posix" 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "sock_impl_set_options", 00:03:57.120 "params": { 00:03:57.120 "impl_name": "ssl", 00:03:57.120 "recv_buf_size": 4096, 00:03:57.120 "send_buf_size": 4096, 00:03:57.120 "enable_recv_pipe": true, 00:03:57.120 "enable_quickack": false, 00:03:57.120 "enable_placement_id": 0, 00:03:57.120 "enable_zerocopy_send_server": true, 00:03:57.120 "enable_zerocopy_send_client": false, 00:03:57.120 "zerocopy_threshold": 0, 00:03:57.120 "tls_version": 0, 00:03:57.120 "enable_ktls": false 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "sock_impl_set_options", 00:03:57.120 "params": { 00:03:57.120 "impl_name": "posix", 00:03:57.120 "recv_buf_size": 2097152, 00:03:57.120 "send_buf_size": 2097152, 00:03:57.120 "enable_recv_pipe": true, 00:03:57.120 "enable_quickack": false, 00:03:57.120 "enable_placement_id": 0, 00:03:57.120 "enable_zerocopy_send_server": true, 00:03:57.120 "enable_zerocopy_send_client": false, 00:03:57.120 "zerocopy_threshold": 0, 00:03:57.120 "tls_version": 0, 00:03:57.120 "enable_ktls": false 00:03:57.120 } 00:03:57.120 } 00:03:57.120 ] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "vmd", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "accel", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "accel_set_options", 00:03:57.120 "params": { 00:03:57.120 "small_cache_size": 128, 00:03:57.120 "large_cache_size": 16, 00:03:57.120 "task_count": 2048, 00:03:57.120 "sequence_count": 2048, 00:03:57.120 "buf_count": 2048 00:03:57.120 } 00:03:57.120 } 00:03:57.120 ] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "bdev", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "bdev_set_options", 00:03:57.120 "params": { 00:03:57.120 "bdev_io_pool_size": 65535, 00:03:57.120 "bdev_io_cache_size": 256, 00:03:57.120 "bdev_auto_examine": true, 00:03:57.120 "iobuf_small_cache_size": 128, 00:03:57.120 "iobuf_large_cache_size": 16 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "bdev_raid_set_options", 00:03:57.120 "params": { 00:03:57.120 "process_window_size_kb": 1024 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "bdev_iscsi_set_options", 00:03:57.120 "params": { 00:03:57.120 "timeout_sec": 30 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "bdev_nvme_set_options", 00:03:57.120 "params": { 00:03:57.120 "action_on_timeout": "none", 00:03:57.120 "timeout_us": 0, 00:03:57.120 "timeout_admin_us": 0, 00:03:57.120 "keep_alive_timeout_ms": 10000, 00:03:57.120 "arbitration_burst": 0, 00:03:57.120 "low_priority_weight": 0, 00:03:57.120 "medium_priority_weight": 0, 00:03:57.120 "high_priority_weight": 0, 00:03:57.120 "nvme_adminq_poll_period_us": 10000, 00:03:57.120 "nvme_ioq_poll_period_us": 0, 00:03:57.120 "io_queue_requests": 0, 00:03:57.120 "delay_cmd_submit": true, 00:03:57.120 "transport_retry_count": 4, 00:03:57.120 "bdev_retry_count": 3, 00:03:57.120 "transport_ack_timeout": 0, 00:03:57.120 "ctrlr_loss_timeout_sec": 0, 00:03:57.120 "reconnect_delay_sec": 0, 00:03:57.120 "fast_io_fail_timeout_sec": 0, 00:03:57.120 "disable_auto_failback": false, 00:03:57.120 "generate_uuids": false, 00:03:57.120 "transport_tos": 0, 
00:03:57.120 "nvme_error_stat": false, 00:03:57.120 "rdma_srq_size": 0, 00:03:57.120 "io_path_stat": false, 00:03:57.120 "allow_accel_sequence": false, 00:03:57.120 "rdma_max_cq_size": 0, 00:03:57.120 "rdma_cm_event_timeout_ms": 0, 00:03:57.120 "dhchap_digests": [ 00:03:57.120 "sha256", 00:03:57.120 "sha384", 00:03:57.120 "sha512" 00:03:57.120 ], 00:03:57.120 "dhchap_dhgroups": [ 00:03:57.120 "null", 00:03:57.120 "ffdhe2048", 00:03:57.120 "ffdhe3072", 00:03:57.120 "ffdhe4096", 00:03:57.120 "ffdhe6144", 00:03:57.120 "ffdhe8192" 00:03:57.120 ] 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "bdev_nvme_set_hotplug", 00:03:57.120 "params": { 00:03:57.120 "period_us": 100000, 00:03:57.120 "enable": false 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "bdev_wait_for_examine" 00:03:57.120 } 00:03:57.120 ] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "scsi", 00:03:57.120 "config": null 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "scheduler", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "framework_set_scheduler", 00:03:57.120 "params": { 00:03:57.120 "name": "static" 00:03:57.120 } 00:03:57.120 } 00:03:57.120 ] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "vhost_scsi", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "vhost_blk", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "ublk", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "nbd", 00:03:57.120 "config": [] 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "subsystem": "nvmf", 00:03:57.120 "config": [ 00:03:57.120 { 00:03:57.120 "method": "nvmf_set_config", 00:03:57.120 "params": { 00:03:57.120 "discovery_filter": "match_any", 00:03:57.120 "admin_cmd_passthru": { 00:03:57.120 "identify_ctrlr": false 00:03:57.120 } 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "nvmf_set_max_subsystems", 00:03:57.120 "params": { 00:03:57.120 "max_subsystems": 1024 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "nvmf_set_crdt", 00:03:57.120 "params": { 00:03:57.120 "crdt1": 0, 00:03:57.120 "crdt2": 0, 00:03:57.120 "crdt3": 0 00:03:57.120 } 00:03:57.120 }, 00:03:57.120 { 00:03:57.120 "method": "nvmf_create_transport", 00:03:57.120 "params": { 00:03:57.120 "trtype": "TCP", 00:03:57.121 "max_queue_depth": 128, 00:03:57.121 "max_io_qpairs_per_ctrlr": 127, 00:03:57.121 "in_capsule_data_size": 4096, 00:03:57.121 "max_io_size": 131072, 00:03:57.121 "io_unit_size": 131072, 00:03:57.121 "max_aq_depth": 128, 00:03:57.121 "num_shared_buffers": 511, 00:03:57.121 "buf_cache_size": 4294967295, 00:03:57.121 "dif_insert_or_strip": false, 00:03:57.121 "zcopy": false, 00:03:57.121 "c2h_success": true, 00:03:57.121 "sock_priority": 0, 00:03:57.121 "abort_timeout_sec": 1, 00:03:57.121 "ack_timeout": 0, 00:03:57.121 "data_wr_pool_size": 0 00:03:57.121 } 00:03:57.121 } 00:03:57.121 ] 00:03:57.121 }, 00:03:57.121 { 00:03:57.121 "subsystem": "iscsi", 00:03:57.121 "config": [ 00:03:57.121 { 00:03:57.121 "method": "iscsi_set_options", 00:03:57.121 "params": { 00:03:57.121 "node_base": "iqn.2016-06.io.spdk", 00:03:57.121 "max_sessions": 128, 00:03:57.121 "max_connections_per_session": 2, 00:03:57.121 "max_queue_depth": 64, 00:03:57.121 "default_time2wait": 2, 00:03:57.121 "default_time2retain": 20, 00:03:57.121 "first_burst_length": 8192, 00:03:57.121 "immediate_data": true, 00:03:57.121 "allow_duplicated_isid": false, 00:03:57.121 
"error_recovery_level": 0, 00:03:57.121 "nop_timeout": 60, 00:03:57.121 "nop_in_interval": 30, 00:03:57.121 "disable_chap": false, 00:03:57.121 "require_chap": false, 00:03:57.121 "mutual_chap": false, 00:03:57.121 "chap_group": 0, 00:03:57.121 "max_large_datain_per_connection": 64, 00:03:57.121 "max_r2t_per_connection": 4, 00:03:57.121 "pdu_pool_size": 36864, 00:03:57.121 "immediate_data_pool_size": 16384, 00:03:57.121 "data_out_pool_size": 2048 00:03:57.121 } 00:03:57.121 } 00:03:57.121 ] 00:03:57.121 } 00:03:57.121 ] 00:03:57.121 } 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 660470 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 660470 ']' 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 660470 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660470 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660470' 00:03:57.121 killing process with pid 660470 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 660470 00:03:57.121 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 660470 00:03:57.697 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=660612 00:03:57.697 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.697 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 660612 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 660612 ']' 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 660612 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660612 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660612' 00:04:02.999 killing process with pid 660612 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 660612 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 660612 00:04:02.999 15:56:48 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.999 00:04:02.999 real 0m6.525s 00:04:02.999 user 0m6.149s 00:04:02.999 sys 0m0.658s 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.999 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.999 ************************************ 00:04:02.999 END TEST skip_rpc_with_json 00:04:02.999 ************************************ 00:04:02.999 15:56:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.259 15:56:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.259 ************************************ 00:04:03.259 START TEST skip_rpc_with_delay 00:04:03.259 ************************************ 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.259 [2024-07-15 15:56:49.084510] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
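The error just logged is the whole point of the skip_rpc_with_delay case: '--wait-for-rpc' tells the target to pause start-up until an RPC arrives, so combining it with '--no-rpc-server' must be rejected immediately. A negative-check sketch of that expectation:

    # must fail fast: there would be no way to deliver the start-up RPC
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success" >&2
        exit 1
    fi
    echo "got the expected start-up failure"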
00:04:03.259 [2024-07-15 15:56:49.084628] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:03.259 00:04:03.259 real 0m0.070s 00:04:03.259 user 0m0.040s 00:04:03.259 sys 0m0.029s 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.259 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:03.259 ************************************ 00:04:03.259 END TEST skip_rpc_with_delay 00:04:03.259 ************************************ 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.259 15:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:03.259 15:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:03.259 15:56:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.259 15:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.259 ************************************ 00:04:03.259 START TEST exit_on_failed_rpc_init 00:04:03.259 ************************************ 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=661332 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 661332 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 661332 ']' 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.259 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.259 [2024-07-15 15:56:49.203445] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:03.259 [2024-07-15 15:56:49.203549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661332 ] 00:04:03.259 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.519 [2024-07-15 15:56:49.261400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.519 [2024-07-15 15:56:49.371658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.778 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.778 [2024-07-15 15:56:49.658498] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:03.778 [2024-07-15 15:56:49.658603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661337 ] 00:04:03.778 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.778 [2024-07-15 15:56:49.717151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.038 [2024-07-15 15:56:49.825802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.038 [2024-07-15 15:56:49.825934] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:04.038 [2024-07-15 15:56:49.825963] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:04.038 [2024-07-15 15:56:49.825992] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 661332 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 661332 ']' 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 661332 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661332 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661332' 00:04:04.038 killing process with pid 661332 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 661332 00:04:04.038 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 661332 00:04:04.606 00:04:04.606 real 0m1.247s 00:04:04.606 user 0m1.417s 00:04:04.606 sys 0m0.420s 00:04:04.606 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.606 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.606 ************************************ 00:04:04.606 END TEST exit_on_failed_rpc_init 00:04:04.606 ************************************ 00:04:04.606 15:56:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.606 15:56:50 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.606 00:04:04.607 real 0m13.567s 00:04:04.607 user 0m12.880s 00:04:04.607 sys 0m1.575s 00:04:04.607 15:56:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.607 15:56:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 ************************************ 00:04:04.607 END TEST skip_rpc 00:04:04.607 ************************************ 00:04:04.607 15:56:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.607 15:56:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.607 15:56:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.607 15:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.607 15:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 ************************************ 00:04:04.607 START TEST rpc_client 00:04:04.607 ************************************ 00:04:04.607 15:56:50 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.607 * Looking for test storage... 00:04:04.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.607 15:56:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.607 OK 00:04:04.607 15:56:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.607 00:04:04.607 real 0m0.071s 00:04:04.607 user 0m0.034s 00:04:04.607 sys 0m0.042s 00:04:04.607 15:56:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.607 15:56:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 ************************************ 00:04:04.607 END TEST rpc_client 00:04:04.607 ************************************ 00:04:04.607 15:56:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.607 15:56:50 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.607 15:56:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.607 15:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.607 15:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 ************************************ 00:04:04.607 START TEST json_config 00:04:04.607 ************************************ 00:04:04.607 15:56:50 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.865 15:56:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.865 15:56:50 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.865 15:56:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.865 15:56:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.865 15:56:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.865 15:56:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.865 15:56:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.865 15:56:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.865 15:56:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.865 15:56:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.865 15:56:50 json_config -- nvmf/common.sh@47 -- # : 0 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.866 15:56:50 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.866 15:56:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:04.866 INFO: JSON configuration test init 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.866 15:56:50 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.866 15:56:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.866 15:56:50 json_config -- json_config/common.sh@10 -- # shift 00:04:04.866 15:56:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.866 15:56:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.866 15:56:50 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:04:04.866 15:56:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.866 15:56:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.866 15:56:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=661579 00:04:04.866 15:56:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.866 15:56:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.866 Waiting for target to run... 00:04:04.866 15:56:50 json_config -- json_config/common.sh@25 -- # waitforlisten 661579 /var/tmp/spdk_tgt.sock 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 661579 ']' 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.866 15:56:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.866 [2024-07-15 15:56:50.711314] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:04.866 [2024-07-15 15:56:50.711417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661579 ] 00:04:04.866 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.125 [2024-07-15 15:56:51.053219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.384 [2024-07-15 15:56:51.131167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:05.951 15:56:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.951 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.951 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.951 15:56:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:05.951 15:56:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:09.236 15:56:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.236 15:56:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:09.236 15:56:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:09.236 15:56:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:09.236 15:56:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:09.236 15:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:09.236 15:56:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.236 15:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:09.236 15:56:55 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.236 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.494 MallocForNvmf0 00:04:09.494 15:56:55 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.494 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.751 MallocForNvmf1 
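The tgt_rpc wrappers above and immediately below configure the target purely over its UNIX-domain RPC socket. As a rough standalone sketch of the same NVMe-oF/TCP setup (the SPDK_DIR and RPC shell variables are introduced here only for readability; the socket path and every RPC name/argument are taken from this trace, and a spdk_tgt is assumed to be already listening on that socket):

# Hypothetical replay of the subsystem setup json_config.sh performs in this run
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust for a local checkout
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Malloc bdevs backing the namespaces (size in MB, block size), as created above
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, subsystem, namespaces and a listener on 127.0.0.1:4420, as in the calls that follow
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420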
00:04:09.751 15:56:55 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.751 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.009 [2024-07-15 15:56:55.818268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.009 15:56:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.009 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.267 15:56:56 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.267 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.525 15:56:56 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.526 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.784 15:56:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.784 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.784 [2024-07-15 15:56:56.781372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.042 15:56:56 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:11.042 15:56:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.042 15:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.042 15:56:56 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:11.042 15:56:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.042 15:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.042 15:56:56 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:11.042 15:56:56 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.042 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.300 MallocBdevForConfigChangeCheck 00:04:11.300 15:56:57 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:11.300 15:56:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.300 15:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.300 15:56:57 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:11.300 15:56:57 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.558 15:56:57 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:11.558 INFO: shutting down applications... 00:04:11.558 15:56:57 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:11.558 15:56:57 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:11.558 15:56:57 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:11.558 15:56:57 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:13.462 Calling clear_iscsi_subsystem 00:04:13.462 Calling clear_nvmf_subsystem 00:04:13.462 Calling clear_nbd_subsystem 00:04:13.462 Calling clear_ublk_subsystem 00:04:13.462 Calling clear_vhost_blk_subsystem 00:04:13.462 Calling clear_vhost_scsi_subsystem 00:04:13.462 Calling clear_bdev_subsystem 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@345 -- # break 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:13.462 15:56:59 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:13.462 15:56:59 json_config -- json_config/common.sh@31 -- # local app=target 00:04:13.462 15:56:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.778 15:56:59 json_config -- json_config/common.sh@35 -- # [[ -n 661579 ]] 00:04:13.778 15:56:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 661579 00:04:13.778 15:56:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.778 15:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.778 15:56:59 json_config -- json_config/common.sh@41 -- # kill -0 661579 00:04:13.778 15:56:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.036 15:56:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.036 15:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.036 15:56:59 json_config -- json_config/common.sh@41 -- # kill -0 661579 00:04:14.036 15:56:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.036 15:56:59 json_config -- json_config/common.sh@43 -- # break 00:04:14.036 15:56:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.036 15:56:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.036 SPDK target shutdown done 00:04:14.036 15:56:59 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:14.036 INFO: relaunching applications... 00:04:14.036 15:56:59 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.036 15:56:59 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.036 15:56:59 json_config -- json_config/common.sh@10 -- # shift 00:04:14.036 15:56:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.036 15:56:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.036 15:56:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.036 15:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.036 15:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.036 15:56:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=662892 00:04:14.036 15:56:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.036 15:56:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.036 Waiting for target to run... 00:04:14.036 15:56:59 json_config -- json_config/common.sh@25 -- # waitforlisten 662892 /var/tmp/spdk_tgt.sock 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@829 -- # '[' -z 662892 ']' 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.036 15:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.036 [2024-07-15 15:57:00.028683] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:14.036 [2024-07-15 15:57:00.028790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662892 ] 00:04:14.293 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.552 [2024-07-15 15:57:00.548931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.810 [2024-07-15 15:57:00.647327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.098 [2024-07-15 15:57:03.687826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.098 [2024-07-15 15:57:03.720267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.664 15:57:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.664 15:57:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:18.664 15:57:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.664 00:04:18.664 15:57:04 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:18.664 15:57:04 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:18.664 INFO: Checking if target configuration is the same... 00:04:18.664 15:57:04 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.664 15:57:04 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:18.664 15:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.664 + '[' 2 -ne 2 ']' 00:04:18.664 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:18.664 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:18.664 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.664 +++ basename /dev/fd/62 00:04:18.664 ++ mktemp /tmp/62.XXX 00:04:18.664 + tmp_file_1=/tmp/62.JZ0 00:04:18.664 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.664 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.664 + tmp_file_2=/tmp/spdk_tgt_config.json.bf0 00:04:18.664 + ret=0 00:04:18.664 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.922 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.922 + diff -u /tmp/62.JZ0 /tmp/spdk_tgt_config.json.bf0 00:04:18.922 + echo 'INFO: JSON config files are the same' 00:04:18.922 INFO: JSON config files are the same 00:04:18.922 + rm /tmp/62.JZ0 /tmp/spdk_tgt_config.json.bf0 00:04:18.922 + exit 0 00:04:18.922 15:57:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:18.922 15:57:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:18.922 INFO: changing configuration and checking if this can be detected... 
00:04:18.922 15:57:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.922 15:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.180 15:57:05 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.180 15:57:05 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:19.180 15:57:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.180 + '[' 2 -ne 2 ']' 00:04:19.180 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.180 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:19.180 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.180 +++ basename /dev/fd/62 00:04:19.180 ++ mktemp /tmp/62.XXX 00:04:19.180 + tmp_file_1=/tmp/62.NQM 00:04:19.180 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.180 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.180 + tmp_file_2=/tmp/spdk_tgt_config.json.CwV 00:04:19.180 + ret=0 00:04:19.180 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.750 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.750 + diff -u /tmp/62.NQM /tmp/spdk_tgt_config.json.CwV 00:04:19.750 + ret=1 00:04:19.750 + echo '=== Start of file: /tmp/62.NQM ===' 00:04:19.750 + cat /tmp/62.NQM 00:04:19.750 + echo '=== End of file: /tmp/62.NQM ===' 00:04:19.750 + echo '' 00:04:19.750 + echo '=== Start of file: /tmp/spdk_tgt_config.json.CwV ===' 00:04:19.750 + cat /tmp/spdk_tgt_config.json.CwV 00:04:19.750 + echo '=== End of file: /tmp/spdk_tgt_config.json.CwV ===' 00:04:19.750 + echo '' 00:04:19.750 + rm /tmp/62.NQM /tmp/spdk_tgt_config.json.CwV 00:04:19.750 + exit 1 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:19.750 INFO: configuration change detected. 
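The "JSON config files are the same" and "configuration change detected" verdicts above come from a plain sorted-JSON diff rather than a dedicated RPC: the live configuration is dumped with save_config, both sides are normalized with config_filter.py -method sort, and diff -u decides the result. A minimal sketch of that check, assuming a target still listening on /var/tmp/spdk_tgt.sock and that config_filter.py filters stdin to stdout as json_diff.sh uses it here; the /tmp file names are chosen only for illustration:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust for a local checkout
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER="$SPDK_DIR/test/json_config/config_filter.py"

# Normalize the saved reference config and a fresh snapshot of the live config
$FILTER -method sort < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/reference.sorted.json
$RPC save_config | $FILTER -method sort > /tmp/live.sorted.json

# Identical output means no drift; after e.g. "$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck"
# the same diff exits non-zero, which the test reports as a detected configuration change
diff -u /tmp/reference.sorted.json /tmp/live.sorted.json \
    && echo 'JSON config files are the same' \
    || echo 'configuration change detected'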
00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@317 -- # [[ -n 662892 ]] 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.750 15:57:05 json_config -- json_config/json_config.sh@323 -- # killprocess 662892 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@948 -- # '[' -z 662892 ']' 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@952 -- # kill -0 662892 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@953 -- # uname 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 662892 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 662892' 00:04:19.750 killing process with pid 662892 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@967 -- # kill 662892 00:04:19.750 15:57:05 json_config -- common/autotest_common.sh@972 -- # wait 662892 00:04:21.690 15:57:07 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.690 15:57:07 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:21.690 15:57:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.690 15:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.690 15:57:07 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:21.690 15:57:07 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:21.690 INFO: Success 00:04:21.690 00:04:21.690 real 0m16.667s 00:04:21.690 user 
0m18.589s 00:04:21.690 sys 0m2.046s 00:04:21.690 15:57:07 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.690 15:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.690 ************************************ 00:04:21.690 END TEST json_config 00:04:21.690 ************************************ 00:04:21.690 15:57:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.690 15:57:07 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:21.690 15:57:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.690 15:57:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.690 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.690 ************************************ 00:04:21.690 START TEST json_config_extra_key 00:04:21.690 ************************************ 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.690 15:57:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.690 15:57:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.690 15:57:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.690 15:57:07 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.690 15:57:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.690 15:57:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.690 15:57:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:21.690 15:57:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:21.690 15:57:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.690 15:57:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:21.690 INFO: launching applications... 00:04:21.690 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=663927 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.690 Waiting for target to run... 00:04:21.690 15:57:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 663927 /var/tmp/spdk_tgt.sock 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 663927 ']' 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.690 15:57:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.690 [2024-07-15 15:57:07.424149] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:21.690 [2024-07-15 15:57:07.424264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663927 ] 00:04:21.690 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.949 [2024-07-15 15:57:07.765874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.949 [2024-07-15 15:57:07.844049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.514 15:57:08 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.514 15:57:08 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:22.514 00:04:22.514 15:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:22.514 INFO: shutting down applications... 00:04:22.514 15:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 663927 ]] 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 663927 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 663927 00:04:22.514 15:57:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 663927 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.079 15:57:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.079 SPDK target shutdown done 00:04:23.079 15:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:23.079 Success 00:04:23.079 00:04:23.079 real 0m1.569s 00:04:23.079 user 0m1.551s 00:04:23.079 sys 0m0.441s 00:04:23.079 15:57:08 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.079 15:57:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.079 ************************************ 00:04:23.079 END TEST json_config_extra_key 00:04:23.079 ************************************ 00:04:23.079 15:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.079 15:57:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.079 15:57:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.079 15:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.079 15:57:08 -- 
common/autotest_common.sh@10 -- # set +x 00:04:23.079 ************************************ 00:04:23.079 START TEST alias_rpc 00:04:23.079 ************************************ 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.079 * Looking for test storage... 00:04:23.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:23.079 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.079 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=664693 00:04:23.079 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.079 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 664693 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 664693 ']' 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.079 15:57:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.079 [2024-07-15 15:57:09.040056] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:23.079 [2024-07-15 15:57:09.040139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664693 ] 00:04:23.079 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.337 [2024-07-15 15:57:09.102180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.337 [2024-07-15 15:57:09.210411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.595 15:57:09 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.595 15:57:09 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:23.595 15:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:23.855 15:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 664693 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 664693 ']' 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 664693 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 664693 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 664693' 00:04:23.855 killing process with pid 664693 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@967 
-- # kill 664693 00:04:23.855 15:57:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 664693 00:04:24.424 00:04:24.424 real 0m1.235s 00:04:24.424 user 0m1.324s 00:04:24.424 sys 0m0.405s 00:04:24.424 15:57:10 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.424 15:57:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.424 ************************************ 00:04:24.424 END TEST alias_rpc 00:04:24.424 ************************************ 00:04:24.424 15:57:10 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.424 15:57:10 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:24.424 15:57:10 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.424 15:57:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.424 15:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.424 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.424 ************************************ 00:04:24.424 START TEST spdkcli_tcp 00:04:24.424 ************************************ 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.424 * Looking for test storage... 00:04:24.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=664930 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:24.424 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 664930 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 664930 ']' 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.424 15:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.424 [2024-07-15 15:57:10.335048] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
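The spdkcli_tcp run below reaches the target's UNIX-domain RPC socket over TCP through a socat bridge. A minimal sketch of that pattern, reusing the port, socket path and rpc.py options shown in the log (the relative script path is an assumption; the test itself uses the full Jenkins workspace path):

    # bridge TCP 127.0.0.1:9998 to the SPDK RPC socket, in the background
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # drive an RPC over the bridge; -r sets retries, -t the per-call timeout
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods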
00:04:24.424 [2024-07-15 15:57:10.335130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664930 ] 00:04:24.424 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.424 [2024-07-15 15:57:10.392207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.684 [2024-07-15 15:57:10.498705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.684 [2024-07-15 15:57:10.498708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.942 15:57:10 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.942 15:57:10 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:24.942 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=664934 00:04:24.942 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.942 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:25.202 [ 00:04:25.202 "bdev_malloc_delete", 00:04:25.202 "bdev_malloc_create", 00:04:25.202 "bdev_null_resize", 00:04:25.202 "bdev_null_delete", 00:04:25.202 "bdev_null_create", 00:04:25.202 "bdev_nvme_cuse_unregister", 00:04:25.202 "bdev_nvme_cuse_register", 00:04:25.202 "bdev_opal_new_user", 00:04:25.202 "bdev_opal_set_lock_state", 00:04:25.202 "bdev_opal_delete", 00:04:25.202 "bdev_opal_get_info", 00:04:25.202 "bdev_opal_create", 00:04:25.202 "bdev_nvme_opal_revert", 00:04:25.202 "bdev_nvme_opal_init", 00:04:25.202 "bdev_nvme_send_cmd", 00:04:25.202 "bdev_nvme_get_path_iostat", 00:04:25.202 "bdev_nvme_get_mdns_discovery_info", 00:04:25.202 "bdev_nvme_stop_mdns_discovery", 00:04:25.202 "bdev_nvme_start_mdns_discovery", 00:04:25.202 "bdev_nvme_set_multipath_policy", 00:04:25.202 "bdev_nvme_set_preferred_path", 00:04:25.202 "bdev_nvme_get_io_paths", 00:04:25.202 "bdev_nvme_remove_error_injection", 00:04:25.202 "bdev_nvme_add_error_injection", 00:04:25.202 "bdev_nvme_get_discovery_info", 00:04:25.202 "bdev_nvme_stop_discovery", 00:04:25.202 "bdev_nvme_start_discovery", 00:04:25.202 "bdev_nvme_get_controller_health_info", 00:04:25.202 "bdev_nvme_disable_controller", 00:04:25.202 "bdev_nvme_enable_controller", 00:04:25.202 "bdev_nvme_reset_controller", 00:04:25.202 "bdev_nvme_get_transport_statistics", 00:04:25.202 "bdev_nvme_apply_firmware", 00:04:25.202 "bdev_nvme_detach_controller", 00:04:25.202 "bdev_nvme_get_controllers", 00:04:25.202 "bdev_nvme_attach_controller", 00:04:25.202 "bdev_nvme_set_hotplug", 00:04:25.202 "bdev_nvme_set_options", 00:04:25.202 "bdev_passthru_delete", 00:04:25.202 "bdev_passthru_create", 00:04:25.202 "bdev_lvol_set_parent_bdev", 00:04:25.202 "bdev_lvol_set_parent", 00:04:25.202 "bdev_lvol_check_shallow_copy", 00:04:25.202 "bdev_lvol_start_shallow_copy", 00:04:25.202 "bdev_lvol_grow_lvstore", 00:04:25.202 "bdev_lvol_get_lvols", 00:04:25.202 "bdev_lvol_get_lvstores", 00:04:25.202 "bdev_lvol_delete", 00:04:25.202 "bdev_lvol_set_read_only", 00:04:25.202 "bdev_lvol_resize", 00:04:25.202 "bdev_lvol_decouple_parent", 00:04:25.202 "bdev_lvol_inflate", 00:04:25.202 "bdev_lvol_rename", 00:04:25.202 "bdev_lvol_clone_bdev", 00:04:25.202 "bdev_lvol_clone", 00:04:25.202 "bdev_lvol_snapshot", 00:04:25.202 "bdev_lvol_create", 00:04:25.202 "bdev_lvol_delete_lvstore", 00:04:25.202 
"bdev_lvol_rename_lvstore", 00:04:25.202 "bdev_lvol_create_lvstore", 00:04:25.202 "bdev_raid_set_options", 00:04:25.202 "bdev_raid_remove_base_bdev", 00:04:25.202 "bdev_raid_add_base_bdev", 00:04:25.202 "bdev_raid_delete", 00:04:25.202 "bdev_raid_create", 00:04:25.202 "bdev_raid_get_bdevs", 00:04:25.202 "bdev_error_inject_error", 00:04:25.202 "bdev_error_delete", 00:04:25.202 "bdev_error_create", 00:04:25.202 "bdev_split_delete", 00:04:25.202 "bdev_split_create", 00:04:25.202 "bdev_delay_delete", 00:04:25.202 "bdev_delay_create", 00:04:25.202 "bdev_delay_update_latency", 00:04:25.202 "bdev_zone_block_delete", 00:04:25.202 "bdev_zone_block_create", 00:04:25.202 "blobfs_create", 00:04:25.202 "blobfs_detect", 00:04:25.202 "blobfs_set_cache_size", 00:04:25.202 "bdev_aio_delete", 00:04:25.202 "bdev_aio_rescan", 00:04:25.202 "bdev_aio_create", 00:04:25.202 "bdev_ftl_set_property", 00:04:25.202 "bdev_ftl_get_properties", 00:04:25.202 "bdev_ftl_get_stats", 00:04:25.202 "bdev_ftl_unmap", 00:04:25.202 "bdev_ftl_unload", 00:04:25.202 "bdev_ftl_delete", 00:04:25.202 "bdev_ftl_load", 00:04:25.202 "bdev_ftl_create", 00:04:25.202 "bdev_virtio_attach_controller", 00:04:25.202 "bdev_virtio_scsi_get_devices", 00:04:25.202 "bdev_virtio_detach_controller", 00:04:25.202 "bdev_virtio_blk_set_hotplug", 00:04:25.202 "bdev_iscsi_delete", 00:04:25.202 "bdev_iscsi_create", 00:04:25.202 "bdev_iscsi_set_options", 00:04:25.202 "accel_error_inject_error", 00:04:25.202 "ioat_scan_accel_module", 00:04:25.202 "dsa_scan_accel_module", 00:04:25.202 "iaa_scan_accel_module", 00:04:25.202 "vfu_virtio_create_scsi_endpoint", 00:04:25.202 "vfu_virtio_scsi_remove_target", 00:04:25.202 "vfu_virtio_scsi_add_target", 00:04:25.202 "vfu_virtio_create_blk_endpoint", 00:04:25.202 "vfu_virtio_delete_endpoint", 00:04:25.202 "keyring_file_remove_key", 00:04:25.202 "keyring_file_add_key", 00:04:25.202 "keyring_linux_set_options", 00:04:25.202 "iscsi_get_histogram", 00:04:25.202 "iscsi_enable_histogram", 00:04:25.202 "iscsi_set_options", 00:04:25.202 "iscsi_get_auth_groups", 00:04:25.202 "iscsi_auth_group_remove_secret", 00:04:25.202 "iscsi_auth_group_add_secret", 00:04:25.202 "iscsi_delete_auth_group", 00:04:25.202 "iscsi_create_auth_group", 00:04:25.202 "iscsi_set_discovery_auth", 00:04:25.203 "iscsi_get_options", 00:04:25.203 "iscsi_target_node_request_logout", 00:04:25.203 "iscsi_target_node_set_redirect", 00:04:25.203 "iscsi_target_node_set_auth", 00:04:25.203 "iscsi_target_node_add_lun", 00:04:25.203 "iscsi_get_stats", 00:04:25.203 "iscsi_get_connections", 00:04:25.203 "iscsi_portal_group_set_auth", 00:04:25.203 "iscsi_start_portal_group", 00:04:25.203 "iscsi_delete_portal_group", 00:04:25.203 "iscsi_create_portal_group", 00:04:25.203 "iscsi_get_portal_groups", 00:04:25.203 "iscsi_delete_target_node", 00:04:25.203 "iscsi_target_node_remove_pg_ig_maps", 00:04:25.203 "iscsi_target_node_add_pg_ig_maps", 00:04:25.203 "iscsi_create_target_node", 00:04:25.203 "iscsi_get_target_nodes", 00:04:25.203 "iscsi_delete_initiator_group", 00:04:25.203 "iscsi_initiator_group_remove_initiators", 00:04:25.203 "iscsi_initiator_group_add_initiators", 00:04:25.203 "iscsi_create_initiator_group", 00:04:25.203 "iscsi_get_initiator_groups", 00:04:25.203 "nvmf_set_crdt", 00:04:25.203 "nvmf_set_config", 00:04:25.203 "nvmf_set_max_subsystems", 00:04:25.203 "nvmf_stop_mdns_prr", 00:04:25.203 "nvmf_publish_mdns_prr", 00:04:25.203 "nvmf_subsystem_get_listeners", 00:04:25.203 "nvmf_subsystem_get_qpairs", 00:04:25.203 "nvmf_subsystem_get_controllers", 00:04:25.203 
"nvmf_get_stats", 00:04:25.203 "nvmf_get_transports", 00:04:25.203 "nvmf_create_transport", 00:04:25.203 "nvmf_get_targets", 00:04:25.203 "nvmf_delete_target", 00:04:25.203 "nvmf_create_target", 00:04:25.203 "nvmf_subsystem_allow_any_host", 00:04:25.203 "nvmf_subsystem_remove_host", 00:04:25.203 "nvmf_subsystem_add_host", 00:04:25.203 "nvmf_ns_remove_host", 00:04:25.203 "nvmf_ns_add_host", 00:04:25.203 "nvmf_subsystem_remove_ns", 00:04:25.203 "nvmf_subsystem_add_ns", 00:04:25.203 "nvmf_subsystem_listener_set_ana_state", 00:04:25.203 "nvmf_discovery_get_referrals", 00:04:25.203 "nvmf_discovery_remove_referral", 00:04:25.203 "nvmf_discovery_add_referral", 00:04:25.203 "nvmf_subsystem_remove_listener", 00:04:25.203 "nvmf_subsystem_add_listener", 00:04:25.203 "nvmf_delete_subsystem", 00:04:25.203 "nvmf_create_subsystem", 00:04:25.203 "nvmf_get_subsystems", 00:04:25.203 "env_dpdk_get_mem_stats", 00:04:25.203 "nbd_get_disks", 00:04:25.203 "nbd_stop_disk", 00:04:25.203 "nbd_start_disk", 00:04:25.203 "ublk_recover_disk", 00:04:25.203 "ublk_get_disks", 00:04:25.203 "ublk_stop_disk", 00:04:25.203 "ublk_start_disk", 00:04:25.203 "ublk_destroy_target", 00:04:25.203 "ublk_create_target", 00:04:25.203 "virtio_blk_create_transport", 00:04:25.203 "virtio_blk_get_transports", 00:04:25.203 "vhost_controller_set_coalescing", 00:04:25.203 "vhost_get_controllers", 00:04:25.203 "vhost_delete_controller", 00:04:25.203 "vhost_create_blk_controller", 00:04:25.203 "vhost_scsi_controller_remove_target", 00:04:25.203 "vhost_scsi_controller_add_target", 00:04:25.203 "vhost_start_scsi_controller", 00:04:25.203 "vhost_create_scsi_controller", 00:04:25.203 "thread_set_cpumask", 00:04:25.203 "framework_get_governor", 00:04:25.203 "framework_get_scheduler", 00:04:25.203 "framework_set_scheduler", 00:04:25.203 "framework_get_reactors", 00:04:25.203 "thread_get_io_channels", 00:04:25.203 "thread_get_pollers", 00:04:25.203 "thread_get_stats", 00:04:25.203 "framework_monitor_context_switch", 00:04:25.203 "spdk_kill_instance", 00:04:25.203 "log_enable_timestamps", 00:04:25.203 "log_get_flags", 00:04:25.203 "log_clear_flag", 00:04:25.203 "log_set_flag", 00:04:25.203 "log_get_level", 00:04:25.203 "log_set_level", 00:04:25.203 "log_get_print_level", 00:04:25.203 "log_set_print_level", 00:04:25.203 "framework_enable_cpumask_locks", 00:04:25.203 "framework_disable_cpumask_locks", 00:04:25.203 "framework_wait_init", 00:04:25.203 "framework_start_init", 00:04:25.203 "scsi_get_devices", 00:04:25.203 "bdev_get_histogram", 00:04:25.203 "bdev_enable_histogram", 00:04:25.203 "bdev_set_qos_limit", 00:04:25.203 "bdev_set_qd_sampling_period", 00:04:25.203 "bdev_get_bdevs", 00:04:25.203 "bdev_reset_iostat", 00:04:25.203 "bdev_get_iostat", 00:04:25.203 "bdev_examine", 00:04:25.203 "bdev_wait_for_examine", 00:04:25.203 "bdev_set_options", 00:04:25.203 "notify_get_notifications", 00:04:25.203 "notify_get_types", 00:04:25.203 "accel_get_stats", 00:04:25.203 "accel_set_options", 00:04:25.203 "accel_set_driver", 00:04:25.203 "accel_crypto_key_destroy", 00:04:25.203 "accel_crypto_keys_get", 00:04:25.203 "accel_crypto_key_create", 00:04:25.203 "accel_assign_opc", 00:04:25.203 "accel_get_module_info", 00:04:25.203 "accel_get_opc_assignments", 00:04:25.203 "vmd_rescan", 00:04:25.203 "vmd_remove_device", 00:04:25.203 "vmd_enable", 00:04:25.203 "sock_get_default_impl", 00:04:25.203 "sock_set_default_impl", 00:04:25.203 "sock_impl_set_options", 00:04:25.203 "sock_impl_get_options", 00:04:25.203 "iobuf_get_stats", 00:04:25.203 "iobuf_set_options", 
00:04:25.203 "keyring_get_keys", 00:04:25.203 "framework_get_pci_devices", 00:04:25.203 "framework_get_config", 00:04:25.203 "framework_get_subsystems", 00:04:25.203 "vfu_tgt_set_base_path", 00:04:25.203 "trace_get_info", 00:04:25.203 "trace_get_tpoint_group_mask", 00:04:25.203 "trace_disable_tpoint_group", 00:04:25.203 "trace_enable_tpoint_group", 00:04:25.203 "trace_clear_tpoint_mask", 00:04:25.203 "trace_set_tpoint_mask", 00:04:25.203 "spdk_get_version", 00:04:25.203 "rpc_get_methods" 00:04:25.203 ] 00:04:25.203 15:57:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.203 15:57:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:25.203 15:57:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 664930 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 664930 ']' 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 664930 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 664930 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 664930' 00:04:25.203 killing process with pid 664930 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 664930 00:04:25.203 15:57:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 664930 00:04:25.771 00:04:25.771 real 0m1.276s 00:04:25.771 user 0m2.250s 00:04:25.771 sys 0m0.447s 00:04:25.771 15:57:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.771 15:57:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.771 ************************************ 00:04:25.771 END TEST spdkcli_tcp 00:04:25.771 ************************************ 00:04:25.771 15:57:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.771 15:57:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:25.771 15:57:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.771 15:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.771 15:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:25.771 ************************************ 00:04:25.771 START TEST dpdk_mem_utility 00:04:25.771 ************************************ 00:04:25.771 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:25.772 * Looking for test storage... 
00:04:25.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:25.772 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.772 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=665132 00:04:25.772 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.772 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 665132 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 665132 ']' 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.772 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.772 [2024-07-15 15:57:11.660163] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:25.772 [2024-07-15 15:57:11.660254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665132 ] 00:04:25.772 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.772 [2024-07-15 15:57:11.716827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.031 [2024-07-15 15:57:11.822437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.292 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.292 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:26.292 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:26.292 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:26.292 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.292 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.292 { 00:04:26.292 "filename": "/tmp/spdk_mem_dump.txt" 00:04:26.292 } 00:04:26.292 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.292 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:26.292 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:26.292 1 heaps totaling size 814.000000 MiB 00:04:26.292 size: 814.000000 MiB heap id: 0 00:04:26.292 end heaps---------- 00:04:26.292 8 mempools totaling size 598.116089 MiB 00:04:26.292 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:26.292 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:26.292 size: 84.521057 MiB name: bdev_io_665132 00:04:26.292 size: 51.011292 MiB name: evtpool_665132 00:04:26.292 size: 
50.003479 MiB name: msgpool_665132 00:04:26.292 size: 21.763794 MiB name: PDU_Pool 00:04:26.292 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:26.292 size: 0.026123 MiB name: Session_Pool 00:04:26.292 end mempools------- 00:04:26.292 6 memzones totaling size 4.142822 MiB 00:04:26.292 size: 1.000366 MiB name: RG_ring_0_665132 00:04:26.292 size: 1.000366 MiB name: RG_ring_1_665132 00:04:26.292 size: 1.000366 MiB name: RG_ring_4_665132 00:04:26.292 size: 1.000366 MiB name: RG_ring_5_665132 00:04:26.292 size: 0.125366 MiB name: RG_ring_2_665132 00:04:26.292 size: 0.015991 MiB name: RG_ring_3_665132 00:04:26.292 end memzones------- 00:04:26.292 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:26.292 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:26.292 list of free elements. size: 12.519348 MiB 00:04:26.292 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:26.292 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:26.292 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:26.292 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:26.292 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:26.292 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:26.292 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:26.292 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:26.292 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:26.292 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:26.292 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:26.292 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:26.292 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:26.292 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:26.292 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:26.292 list of standard malloc elements. 
size: 199.218079 MiB 00:04:26.292 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:26.292 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:26.292 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:26.292 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:26.292 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:26.292 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:26.292 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:26.292 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:26.292 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:26.292 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:26.292 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:26.292 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:26.292 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:26.292 list of memzone associated elements. 
size: 602.262573 MiB 00:04:26.292 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:26.292 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:26.292 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:26.292 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:26.292 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:26.292 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_665132_0 00:04:26.292 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:26.292 associated memzone info: size: 48.002930 MiB name: MP_evtpool_665132_0 00:04:26.292 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:26.292 associated memzone info: size: 48.002930 MiB name: MP_msgpool_665132_0 00:04:26.292 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:26.292 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:26.293 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:26.293 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:26.293 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:26.293 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_665132 00:04:26.293 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:26.293 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_665132 00:04:26.293 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:26.293 associated memzone info: size: 1.007996 MiB name: MP_evtpool_665132 00:04:26.293 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:26.293 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:26.293 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:26.293 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:26.293 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:26.293 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:26.293 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:26.293 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:26.293 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:26.293 associated memzone info: size: 1.000366 MiB name: RG_ring_0_665132 00:04:26.293 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:26.293 associated memzone info: size: 1.000366 MiB name: RG_ring_1_665132 00:04:26.293 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:26.293 associated memzone info: size: 1.000366 MiB name: RG_ring_4_665132 00:04:26.293 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:26.293 associated memzone info: size: 1.000366 MiB name: RG_ring_5_665132 00:04:26.293 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:26.293 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_665132 00:04:26.293 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:26.293 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:26.293 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:26.293 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:26.293 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:26.293 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:26.293 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:26.293 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_665132 00:04:26.293 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:26.293 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:26.293 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:26.293 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:26.293 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:26.293 associated memzone info: size: 0.015991 MiB name: RG_ring_3_665132 00:04:26.293 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:26.293 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:26.293 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:26.293 associated memzone info: size: 0.000183 MiB name: MP_msgpool_665132 00:04:26.293 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:26.293 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_665132 00:04:26.293 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:26.293 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:26.293 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:26.293 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 665132 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 665132 ']' 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 665132 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 665132 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 665132' 00:04:26.293 killing process with pid 665132 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 665132 00:04:26.293 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 665132 00:04:26.861 00:04:26.861 real 0m1.091s 00:04:26.861 user 0m1.033s 00:04:26.861 sys 0m0.408s 00:04:26.861 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.861 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.861 ************************************ 00:04:26.861 END TEST dpdk_mem_utility 00:04:26.861 ************************************ 00:04:26.861 15:57:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.861 15:57:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.861 15:57:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.861 15:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.861 15:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:26.861 ************************************ 00:04:26.861 START TEST event 00:04:26.861 ************************************ 00:04:26.861 15:57:12 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.861 * Looking for test storage... 
00:04:26.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:26.861 15:57:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:26.861 15:57:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:26.861 15:57:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.861 15:57:12 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:26.862 15:57:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.862 15:57:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.862 ************************************ 00:04:26.862 START TEST event_perf 00:04:26.862 ************************************ 00:04:26.862 15:57:12 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.862 Running I/O for 1 seconds...[2024-07-15 15:57:12.789428] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:26.862 [2024-07-15 15:57:12.789487] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665321 ] 00:04:26.862 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.862 [2024-07-15 15:57:12.845676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:27.119 [2024-07-15 15:57:12.948599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.119 [2024-07-15 15:57:12.948701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.119 [2024-07-15 15:57:12.948790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.119 [2024-07-15 15:57:12.948798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.057 Running I/O for 1 seconds... 00:04:28.057 lcore 0: 233561 00:04:28.057 lcore 1: 233562 00:04:28.057 lcore 2: 233560 00:04:28.057 lcore 3: 233562 00:04:28.057 done. 00:04:28.315 00:04:28.315 real 0m1.283s 00:04:28.315 user 0m4.203s 00:04:28.315 sys 0m0.075s 00:04:28.315 15:57:14 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.316 15:57:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.316 ************************************ 00:04:28.316 END TEST event_perf 00:04:28.316 ************************************ 00:04:28.316 15:57:14 event -- common/autotest_common.sh@1142 -- # return 0 00:04:28.316 15:57:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.316 15:57:14 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:28.316 15:57:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.316 15:57:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.316 ************************************ 00:04:28.316 START TEST event_reactor 00:04:28.316 ************************************ 00:04:28.316 15:57:14 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.316 [2024-07-15 15:57:14.122268] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:28.316 [2024-07-15 15:57:14.122335] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665484 ] 00:04:28.316 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.316 [2024-07-15 15:57:14.181322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.316 [2024-07-15 15:57:14.295320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.693 test_start 00:04:29.693 oneshot 00:04:29.693 tick 100 00:04:29.693 tick 100 00:04:29.693 tick 250 00:04:29.693 tick 100 00:04:29.693 tick 100 00:04:29.693 tick 100 00:04:29.693 tick 250 00:04:29.693 tick 500 00:04:29.693 tick 100 00:04:29.693 tick 100 00:04:29.693 tick 250 00:04:29.693 tick 100 00:04:29.693 tick 100 00:04:29.693 test_end 00:04:29.693 00:04:29.693 real 0m1.298s 00:04:29.693 user 0m1.214s 00:04:29.693 sys 0m0.080s 00:04:29.693 15:57:15 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.693 15:57:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:29.693 ************************************ 00:04:29.693 END TEST event_reactor 00:04:29.693 ************************************ 00:04:29.693 15:57:15 event -- common/autotest_common.sh@1142 -- # return 0 00:04:29.693 15:57:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.693 15:57:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:29.693 15:57:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.693 15:57:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.694 ************************************ 00:04:29.694 START TEST event_reactor_perf 00:04:29.694 ************************************ 00:04:29.694 15:57:15 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.694 [2024-07-15 15:57:15.465697] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:29.694 [2024-07-15 15:57:15.465768] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665760 ] 00:04:29.694 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.694 [2024-07-15 15:57:15.528063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.694 [2024-07-15 15:57:15.630633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.075 test_start 00:04:31.075 test_end 00:04:31.075 Performance: 445146 events per second 00:04:31.075 00:04:31.075 real 0m1.289s 00:04:31.075 user 0m1.210s 00:04:31.075 sys 0m0.074s 00:04:31.075 15:57:16 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.075 15:57:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:31.075 ************************************ 00:04:31.075 END TEST event_reactor_perf 00:04:31.075 ************************************ 00:04:31.075 15:57:16 event -- common/autotest_common.sh@1142 -- # return 0 00:04:31.075 15:57:16 event -- event/event.sh@49 -- # uname -s 00:04:31.075 15:57:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:31.075 15:57:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:31.075 15:57:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.075 15:57:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.075 15:57:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.075 ************************************ 00:04:31.075 START TEST event_scheduler 00:04:31.075 ************************************ 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:31.075 * Looking for test storage... 00:04:31.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:31.075 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:31.075 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=665941 00:04:31.075 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:31.075 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.075 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 665941 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 665941 ']' 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.075 15:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.075 [2024-07-15 15:57:16.892728] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:31.075 [2024-07-15 15:57:16.892800] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665941 ] 00:04:31.075 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.075 [2024-07-15 15:57:16.950067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:31.075 [2024-07-15 15:57:17.059430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.075 [2024-07-15 15:57:17.059494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.075 [2024-07-15 15:57:17.059558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.075 [2024-07-15 15:57:17.059562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.335 15:57:17 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.335 15:57:17 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:31.335 15:57:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:31.335 15:57:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 [2024-07-15 15:57:17.108355] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:31.336 [2024-07-15 15:57:17.108380] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:31.336 [2024-07-15 15:57:17.108396] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:31.336 [2024-07-15 15:57:17.108406] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:31.336 [2024-07-15 15:57:17.108416] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 [2024-07-15 15:57:17.203852] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
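The scheduler app above was started with --wait-for-rpc, so the dynamic scheduler is selected and initialization is completed over RPC before any test threads exist. A minimal sketch of that two-step sequence with the stock rpc.py client, assuming the default /var/tmp/spdk.sock socket (the test goes through the rpc_cmd helper from common/autotest_common.sh):

    # pick the dynamic scheduler; the load/core/busy limits it logs above are its defaults
    ./scripts/rpc.py framework_set_scheduler dynamic
    # finish subsystem initialization once the scheduler is in place
    ./scripts/rpc.py framework_start_init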
00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 ************************************ 00:04:31.336 START TEST scheduler_create_thread 00:04:31.336 ************************************ 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 2 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 3 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 4 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 5 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 6 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 7 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 8 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 9 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 10 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.336 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.902 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.902 00:04:31.902 real 0m0.593s 00:04:31.902 user 0m0.010s 00:04:31.902 sys 0m0.004s 00:04:31.902 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.902 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.902 ************************************ 00:04:31.902 END TEST scheduler_create_thread 00:04:31.902 ************************************ 00:04:31.902 15:57:17 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:31.902 15:57:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:31.902 15:57:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 665941 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 665941 ']' 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 665941 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 665941 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 665941' 00:04:31.903 killing process with pid 665941 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 665941 00:04:31.903 15:57:17 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 665941 00:04:32.470 [2024-07-15 15:57:18.304431] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
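The scheduler_create_thread subtest above drives the test-only scheduler_plugin RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete). A minimal sketch of the same calls through rpc.py, assuming the plugin module under test/event/scheduler is on PYTHONPATH (the autotest rpc_cmd wrapper arranges that for the run above):

    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
    # create an always-active thread pinned to core 0; the RPC returns a thread id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # drop thread 11 to 50% active time, then delete thread 12, as in the log
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12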
00:04:32.728 00:04:32.728 real 0m1.768s 00:04:32.728 user 0m2.237s 00:04:32.728 sys 0m0.337s 00:04:32.728 15:57:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.728 15:57:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.728 ************************************ 00:04:32.728 END TEST event_scheduler 00:04:32.728 ************************************ 00:04:32.728 15:57:18 event -- common/autotest_common.sh@1142 -- # return 0 00:04:32.728 15:57:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:32.728 15:57:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:32.728 15:57:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.728 15:57:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.728 15:57:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.728 ************************************ 00:04:32.728 START TEST app_repeat 00:04:32.728 ************************************ 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=666140 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 666140' 00:04:32.728 Process app_repeat pid: 666140 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:32.728 spdk_app_start Round 0 00:04:32.728 15:57:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 666140 /var/tmp/spdk-nbd.sock 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 666140 ']' 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.728 15:57:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.728 [2024-07-15 15:57:18.641602] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:32.728 [2024-07-15 15:57:18.641666] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666140 ] 00:04:32.728 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.728 [2024-07-15 15:57:18.703135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.986 [2024-07-15 15:57:18.813837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.986 [2024-07-15 15:57:18.813840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.986 15:57:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.986 15:57:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:32.986 15:57:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.243 Malloc0 00:04:33.243 15:57:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.501 Malloc1 00:04:33.501 15:57:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.501 15:57:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.758 /dev/nbd0 00:04:33.758 15:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.758 15:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:33.758 15:57:19 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.758 1+0 records in 00:04:33.758 1+0 records out 00:04:33.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144861 s, 28.3 MB/s 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:33.758 15:57:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:33.758 15:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.758 15:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.758 15:57:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.015 /dev/nbd1 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.016 1+0 records in 00:04:34.016 1+0 records out 00:04:34.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162831 s, 25.2 MB/s 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:34.016 15:57:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.016 15:57:19 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.016 15:57:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.272 15:57:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.272 { 00:04:34.272 "nbd_device": "/dev/nbd0", 00:04:34.272 "bdev_name": "Malloc0" 00:04:34.272 }, 00:04:34.272 { 00:04:34.272 "nbd_device": "/dev/nbd1", 00:04:34.272 "bdev_name": "Malloc1" 00:04:34.272 } 00:04:34.272 ]' 00:04:34.272 15:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.272 { 00:04:34.272 "nbd_device": "/dev/nbd0", 00:04:34.272 "bdev_name": "Malloc0" 00:04:34.272 }, 00:04:34.272 { 00:04:34.272 "nbd_device": "/dev/nbd1", 00:04:34.272 "bdev_name": "Malloc1" 00:04:34.272 } 00:04:34.272 ]' 00:04:34.272 15:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.530 /dev/nbd1' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.530 /dev/nbd1' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.530 256+0 records in 00:04:34.530 256+0 records out 00:04:34.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497406 s, 211 MB/s 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.530 256+0 records in 00:04:34.530 256+0 records out 00:04:34.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214754 s, 48.8 MB/s 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.530 256+0 records in 00:04:34.530 256+0 records out 00:04:34.530 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0226165 s, 46.4 MB/s 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.530 15:57:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.787 15:57:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.044 15:57:20 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.044 15:57:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.302 15:57:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.302 15:57:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.561 15:57:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.820 [2024-07-15 15:57:21.702401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.820 [2024-07-15 15:57:21.803418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.820 [2024-07-15 15:57:21.803418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.078 [2024-07-15 15:57:21.860740] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.078 [2024-07-15 15:57:21.860820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.616 15:57:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.616 15:57:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.616 spdk_app_start Round 1 00:04:38.616 15:57:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 666140 /var/tmp/spdk-nbd.sock 00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 666140 ']' 00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
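Each app_repeat round traced in this run follows the same sequence; a minimal sketch is shown below, assuming $SPDK_DIR and $repeat_pid stand in for the workspace path and the app pid seen above (event/event.sh is the real driver).

    # sketch of one app_repeat round
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for the app's RPC socket
    $rpc bdev_malloc_create 64 4096                      # Malloc0
    $rpc bdev_malloc_create 64 4096                      # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $rpc spdk_kill_instance SIGTERM                      # end this round
    sleep 3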
00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.616 15:57:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 15:57:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.873 15:57:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:38.873 15:57:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.131 Malloc0 00:04:39.131 15:57:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.389 Malloc1 00:04:39.389 15:57:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.389 15:57:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.647 /dev/nbd0 00:04:39.647 15:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.647 15:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:39.647 1+0 records in 00:04:39.647 1+0 records out 00:04:39.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172207 s, 23.8 MB/s 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.647 15:57:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.647 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.647 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.648 15:57:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.906 /dev/nbd1 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.906 1+0 records in 00:04:39.906 1+0 records out 00:04:39.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241294 s, 17.0 MB/s 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.906 15:57:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.906 15:57:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:40.164 { 00:04:40.164 "nbd_device": "/dev/nbd0", 00:04:40.164 "bdev_name": "Malloc0" 00:04:40.164 }, 00:04:40.164 { 00:04:40.164 "nbd_device": "/dev/nbd1", 00:04:40.164 "bdev_name": "Malloc1" 00:04:40.164 } 00:04:40.164 ]' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.164 { 00:04:40.164 "nbd_device": "/dev/nbd0", 00:04:40.164 "bdev_name": "Malloc0" 00:04:40.164 }, 00:04:40.164 { 00:04:40.164 "nbd_device": "/dev/nbd1", 00:04:40.164 "bdev_name": "Malloc1" 00:04:40.164 } 00:04:40.164 ]' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.164 /dev/nbd1' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.164 /dev/nbd1' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.164 256+0 records in 00:04:40.164 256+0 records out 00:04:40.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508386 s, 206 MB/s 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.164 256+0 records in 00:04:40.164 256+0 records out 00:04:40.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198288 s, 52.9 MB/s 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.164 256+0 records in 00:04:40.164 256+0 records out 00:04:40.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218998 s, 47.9 MB/s 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.164 15:57:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.422 15:57:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.991 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.250 15:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.250 15:57:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.250 15:57:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.250 15:57:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.250 15:57:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.250 15:57:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.509 15:57:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.769 [2024-07-15 15:57:27.512429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.769 [2024-07-15 15:57:27.614926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.769 [2024-07-15 15:57:27.614928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.769 [2024-07-15 15:57:27.667603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.769 [2024-07-15 15:57:27.667661] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.306 15:57:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.306 15:57:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.306 spdk_app_start Round 2 00:04:44.306 15:57:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 666140 /var/tmp/spdk-nbd.sock 00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 666140 ']' 00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
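The dd and cmp lines in each round come from the nbd write/verify pass; a hedged sketch of that pass is below (bdev/nbd_common.sh nbd_dd_data_verify, with $SPDK_DIR standing in for the workspace path used in this run).

    # sketch: write 1 MiB of random data through each nbd device, then compare it back
    tmp_file="$SPDK_DIR/test/event/nbdrandtest"
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # byte-for-byte verify
    done
    rm "$tmp_file"

Writing with oflag=direct and verifying with cmp against the same temp file is what makes the 48-53 MB/s dd lines and the silent cmp results above meaningful: any mismatch would fail the round.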
00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.306 15:57:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.563 15:57:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.563 15:57:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:44.563 15:57:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.820 Malloc0 00:04:44.820 15:57:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.078 Malloc1 00:04:45.078 15:57:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.078 15:57:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.336 /dev/nbd0 00:04:45.336 15:57:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.336 15:57:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:45.336 1+0 records in 00:04:45.336 1+0 records out 00:04:45.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231376 s, 17.7 MB/s 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.336 15:57:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:45.336 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.336 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.336 15:57:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.593 /dev/nbd1 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.593 1+0 records in 00:04:45.593 1+0 records out 00:04:45.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020758 s, 19.7 MB/s 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.593 15:57:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.593 15:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:45.850 { 00:04:45.850 "nbd_device": "/dev/nbd0", 00:04:45.850 "bdev_name": "Malloc0" 00:04:45.850 }, 00:04:45.850 { 00:04:45.850 "nbd_device": "/dev/nbd1", 00:04:45.850 "bdev_name": "Malloc1" 00:04:45.850 } 00:04:45.850 ]' 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.850 { 00:04:45.850 "nbd_device": "/dev/nbd0", 00:04:45.850 "bdev_name": "Malloc0" 00:04:45.850 }, 00:04:45.850 { 00:04:45.850 "nbd_device": "/dev/nbd1", 00:04:45.850 "bdev_name": "Malloc1" 00:04:45.850 } 00:04:45.850 ]' 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.850 /dev/nbd1' 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.850 /dev/nbd1' 00:04:45.850 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.108 256+0 records in 00:04:46.108 256+0 records out 00:04:46.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039483 s, 266 MB/s 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.108 256+0 records in 00:04:46.108 256+0 records out 00:04:46.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212742 s, 49.3 MB/s 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.108 256+0 records in 00:04:46.108 256+0 records out 00:04:46.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220533 s, 47.5 MB/s 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.108 15:57:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.365 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.365 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.365 15:57:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.365 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.365 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.366 15:57:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.366 15:57:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.366 15:57:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.366 15:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.366 15:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.623 15:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.880 15:57:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.880 15:57:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.139 15:57:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.400 [2024-07-15 15:57:33.270808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.400 [2024-07-15 15:57:33.373473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.400 [2024-07-15 15:57:33.373473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.658 [2024-07-15 15:57:33.431417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.658 [2024-07-15 15:57:33.431499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.194 15:57:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 666140 /var/tmp/spdk-nbd.sock 00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 666140 ']' 00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
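The teardown at the end of each round (nbd_stop_disks followed by nbd_get_count expecting 0) looks roughly like the sketch below; the polling loop approximates waitfornbd_exit and the '|| true' mirrors the trace, where grep -c legitimately finds zero matches.

    # sketch: stop both nbd devices, wait for them to leave /proc/partitions, check none remain
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$nbd"
        while grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
    done
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # no nbd devices should remain after the round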
00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.194 15:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.454 15:57:36 event.app_repeat -- event/event.sh@39 -- # killprocess 666140 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 666140 ']' 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 666140 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 666140 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 666140' 00:04:50.454 killing process with pid 666140 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@967 -- # kill 666140 00:04:50.454 15:57:36 event.app_repeat -- common/autotest_common.sh@972 -- # wait 666140 00:04:50.713 spdk_app_start is called in Round 0. 00:04:50.713 Shutdown signal received, stop current app iteration 00:04:50.713 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:04:50.713 spdk_app_start is called in Round 1. 00:04:50.713 Shutdown signal received, stop current app iteration 00:04:50.713 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:04:50.713 spdk_app_start is called in Round 2. 00:04:50.713 Shutdown signal received, stop current app iteration 00:04:50.713 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:04:50.713 spdk_app_start is called in Round 3. 
00:04:50.713 Shutdown signal received, stop current app iteration 00:04:50.713 15:57:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.713 15:57:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.713 00:04:50.713 real 0m17.906s 00:04:50.713 user 0m38.871s 00:04:50.713 sys 0m3.217s 00:04:50.713 15:57:36 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.713 15:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.713 ************************************ 00:04:50.713 END TEST app_repeat 00:04:50.713 ************************************ 00:04:50.713 15:57:36 event -- common/autotest_common.sh@1142 -- # return 0 00:04:50.713 15:57:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.713 15:57:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.713 15:57:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.713 15:57:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.713 15:57:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.713 ************************************ 00:04:50.713 START TEST cpu_locks 00:04:50.713 ************************************ 00:04:50.713 15:57:36 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.713 * Looking for test storage... 00:04:50.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.713 15:57:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.713 15:57:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.713 15:57:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.713 15:57:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.713 15:57:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.713 15:57:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.713 15:57:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.713 ************************************ 00:04:50.713 START TEST default_locks 00:04:50.713 ************************************ 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=668582 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 668582 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 668582 ']' 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
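The default_locks test whose trace follows boils down to the check sketched here (event/cpu_locks.sh); the pid is the one started in this run, and killprocess/waitforlisten are the autotest_common.sh helpers seen in the log.

    # sketch: a running spdk_tgt must hold its CPU-core lock file; once killed, it must be gone
    spdk_tgt_pid=668582
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # target holds spdk_cpu_lock
    killprocess "$spdk_tgt_pid"
    ! waitforlisten "$spdk_tgt_pid"                       # must now fail: process no longer exists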
00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.713 15:57:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.713 [2024-07-15 15:57:36.703650] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:50.713 [2024-07-15 15:57:36.703750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668582 ] 00:04:50.973 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.973 [2024-07-15 15:57:36.760592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.973 [2024-07-15 15:57:36.863284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.233 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.233 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:51.233 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 668582 00:04:51.233 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 668582 00:04:51.233 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.492 lslocks: write error 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 668582 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 668582 ']' 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 668582 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668582 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668582' 00:04:51.493 killing process with pid 668582 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 668582 00:04:51.493 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 668582 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 668582 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 668582 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 668582 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 668582 ']' 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (668582) - No such process 00:04:52.059 ERROR: process (pid: 668582) is no longer running 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:52.059 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.060 00:04:52.060 real 0m1.163s 00:04:52.060 user 0m1.110s 00:04:52.060 sys 0m0.471s 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.060 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.060 ************************************ 00:04:52.060 END TEST default_locks 00:04:52.060 ************************************ 00:04:52.060 15:57:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:52.060 15:57:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:52.060 15:57:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.060 15:57:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.060 15:57:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.060 ************************************ 00:04:52.060 START TEST default_locks_via_rpc 00:04:52.060 ************************************ 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=668765 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.060 15:57:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 668765 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 668765 ']' 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.060 15:57:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.060 [2024-07-15 15:57:37.915053] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:52.060 [2024-07-15 15:57:37.915131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668765 ] 00:04:52.060 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.060 [2024-07-15 15:57:37.970249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.318 [2024-07-15 15:57:38.070826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 668765 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 668765 00:04:52.318 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.885 15:57:38 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 668765 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 668765 ']' 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 668765 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668765 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668765' 00:04:52.885 killing process with pid 668765 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 668765 00:04:52.885 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 668765 00:04:53.480 00:04:53.480 real 0m1.385s 00:04:53.480 user 0m1.358s 00:04:53.480 sys 0m0.532s 00:04:53.480 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.480 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.480 ************************************ 00:04:53.480 END TEST default_locks_via_rpc 00:04:53.480 ************************************ 00:04:53.480 15:57:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:53.480 15:57:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:53.480 15:57:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.480 15:57:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.480 15:57:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.480 ************************************ 00:04:53.480 START TEST non_locking_app_on_locked_coremask 00:04:53.480 ************************************ 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=668926 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 668926 /var/tmp/spdk.sock 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 668926 ']' 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.480 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.480 [2024-07-15 15:57:39.349981] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:53.480 [2024-07-15 15:57:39.350087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668926 ] 00:04:53.480 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.480 [2024-07-15 15:57:39.409897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.739 [2024-07-15 15:57:39.517103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=668940 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 668940 /var/tmp/spdk2.sock 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 668940 ']' 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.997 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.997 [2024-07-15 15:57:39.803184] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:53.997 [2024-07-15 15:57:39.803275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668940 ] 00:04:53.997 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.997 [2024-07-15 15:57:39.885211] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
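
non_locking_app_on_locked_coremask runs two targets on the same core: the first (pid 668926) claims core 0 normally, while the second (pid 668940) is started with --disable-cpumask-locks and its own RPC socket, which is why it logs "CPU core locks deactivated." and still comes up. A sketch of the two launches, with the build path shortened for readability:

    ./build/bin/spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
    spdk_tgt_pid=$!
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0, takes no lock
    spdk_tgt_pid2=$!
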
00:04:53.997 [2024-07-15 15:57:39.885238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.254 [2024-07-15 15:57:40.101101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.818 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.818 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:54.818 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 668926 00:04:54.818 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 668926 00:04:54.818 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.398 lslocks: write error 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 668926 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 668926 ']' 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 668926 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668926 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668926' 00:04:55.398 killing process with pid 668926 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 668926 00:04:55.398 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 668926 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 668940 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 668940 ']' 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 668940 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668940 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668940' 00:04:56.333 killing 
process with pid 668940 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 668940 00:04:56.333 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 668940 00:04:56.591 00:04:56.591 real 0m3.146s 00:04:56.591 user 0m3.350s 00:04:56.591 sys 0m0.985s 00:04:56.591 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.591 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.591 ************************************ 00:04:56.591 END TEST non_locking_app_on_locked_coremask 00:04:56.591 ************************************ 00:04:56.591 15:57:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:56.591 15:57:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:56.592 15:57:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.592 15:57:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.592 15:57:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.592 ************************************ 00:04:56.592 START TEST locking_app_on_unlocked_coremask 00:04:56.592 ************************************ 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=669368 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 669368 /var/tmp/spdk.sock 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 669368 ']' 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.592 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.592 [2024-07-15 15:57:42.544813] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:04:56.592 [2024-07-15 15:57:42.544894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669368 ] 00:04:56.592 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.850 [2024-07-15 15:57:42.601183] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
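
locking_app_on_unlocked_coremask reverses the roles: here the first target (pid 669368) is the one launched with --disable-cpumask-locks, so core 0 stays unclaimed and the second target (pid 669375, plain -m 0x1 on /var/tmp/spdk2.sock, started in the trace that follows) is the one that takes the lock; lslocks is then checked against the second pid. Roughly:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # leaves core 0 unlocked
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # claims core 0
    spdk_tgt_pid2=$!
    lslocks -p "$spdk_tgt_pid2" | grep -q spdk_cpu_lock          # expected to succeed
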
00:04:56.850 [2024-07-15 15:57:42.601222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.850 [2024-07-15 15:57:42.699223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=669375 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 669375 /var/tmp/spdk2.sock 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 669375 ']' 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.108 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.108 [2024-07-15 15:57:42.993564] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:57.108 [2024-07-15 15:57:42.993640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669375 ] 00:04:57.108 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.108 [2024-07-15 15:57:43.075596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.366 [2024-07-15 15:57:43.289230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.931 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.931 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:57.931 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 669375 00:04:57.931 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 669375 00:04:57.931 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.497 lslocks: write error 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 669368 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 669368 ']' 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 669368 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669368 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669368' 00:04:58.497 killing process with pid 669368 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 669368 00:04:58.497 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 669368 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 669375 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 669375 ']' 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 669375 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669375 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669375' 00:04:59.431 killing process with pid 669375 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 669375 00:04:59.431 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 669375 00:04:59.689 00:04:59.689 real 0m3.145s 00:04:59.689 user 0m3.309s 00:04:59.689 sys 0m0.968s 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.689 ************************************ 00:04:59.689 END TEST locking_app_on_unlocked_coremask 00:04:59.689 ************************************ 00:04:59.689 15:57:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:59.689 15:57:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:59.689 15:57:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.689 15:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.689 15:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.689 ************************************ 00:04:59.689 START TEST locking_app_on_locked_coremask 00:04:59.689 ************************************ 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=669797 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 669797 /var/tmp/spdk.sock 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 669797 ']' 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.689 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.947 [2024-07-15 15:57:45.736685] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:04:59.948 [2024-07-15 15:57:45.736759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669797 ] 00:04:59.948 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.948 [2024-07-15 15:57:45.792814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.948 [2024-07-15 15:57:45.896856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=669807 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 669807 /var/tmp/spdk2.sock 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 669807 /var/tmp/spdk2.sock 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 669807 /var/tmp/spdk2.sock 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 669807 ']' 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.206 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.206 [2024-07-15 15:57:46.184725] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
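
locking_app_on_locked_coremask then starts a second target on the same mask without disabling the locks, and the startup lines that follow show it aborting: "Cannot create lock on core 0, probably process 669797 has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting." The test wraps the call in the NOT helper so that this failure is the passing outcome; a simplified sketch of that assertion (the real helper, per the es bookkeeping in the trace, also validates the wrapped command):

    NOT() {                                                   # succeed only if the wrapped command fails
        if "$@"; then return 1; fi
        return 0
    }
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock    # passes: the second spdk_tgt exited
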
00:05:00.206 [2024-07-15 15:57:46.184801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669807 ] 00:05:00.464 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.464 [2024-07-15 15:57:46.267572] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 669797 has claimed it. 00:05:00.464 [2024-07-15 15:57:46.267642] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (669807) - No such process 00:05:01.029 ERROR: process (pid: 669807) is no longer running 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 669797 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 669797 00:05:01.029 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.287 lslocks: write error 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 669797 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 669797 ']' 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 669797 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669797 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669797' 00:05:01.287 killing process with pid 669797 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 669797 00:05:01.287 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 669797 00:05:01.852 00:05:01.852 real 0m1.995s 00:05:01.852 user 0m2.185s 00:05:01.852 sys 0m0.599s 00:05:01.852 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.852 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.852 ************************************ 00:05:01.852 END TEST locking_app_on_locked_coremask 00:05:01.852 ************************************ 00:05:01.852 15:57:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.852 15:57:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:01.852 15:57:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.852 15:57:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.852 15:57:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.852 ************************************ 00:05:01.852 START TEST locking_overlapped_coremask 00:05:01.852 ************************************ 00:05:01.852 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:01.852 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=669980 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 669980 /var/tmp/spdk.sock 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 669980 ']' 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.853 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.853 [2024-07-15 15:57:47.779897] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:01.853 [2024-07-15 15:57:47.779985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669980 ] 00:05:01.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.853 [2024-07-15 15:57:47.837157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.110 [2024-07-15 15:57:47.945508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.110 [2024-07-15 15:57:47.945571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.110 [2024-07-15 15:57:47.945574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=670105 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 670105 /var/tmp/spdk2.sock 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 670105 /var/tmp/spdk2.sock 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 670105 /var/tmp/spdk2.sock 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 670105 ']' 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.367 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.367 [2024-07-15 15:57:48.230546] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
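
locking_overlapped_coremask uses partially overlapping masks: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so they collide only on core 2 and the second target exits with "Cannot create lock on core 2". The test then confirms that exactly the survivor's lock files remain; a rough equivalent of the check_remaining_locks step seen below:

    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})              # one file per core claimed by -m 0x7
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are still locked"
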
00:05:02.367 [2024-07-15 15:57:48.230636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670105 ] 00:05:02.367 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.367 [2024-07-15 15:57:48.318602] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 669980 has claimed it. 00:05:02.367 [2024-07-15 15:57:48.318673] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (670105) - No such process 00:05:02.931 ERROR: process (pid: 670105) is no longer running 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 669980 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 669980 ']' 00:05:02.931 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 669980 00:05:02.932 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:02.932 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.932 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669980 00:05:03.189 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.189 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.189 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669980' 00:05:03.189 killing process with pid 669980 00:05:03.189 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 669980 00:05:03.189 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 669980 00:05:03.446 00:05:03.446 real 0m1.635s 00:05:03.446 user 0m4.372s 00:05:03.446 sys 0m0.426s 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.446 ************************************ 00:05:03.446 END TEST locking_overlapped_coremask 00:05:03.446 ************************************ 00:05:03.446 15:57:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:03.446 15:57:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:03.446 15:57:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.446 15:57:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.446 15:57:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.446 ************************************ 00:05:03.446 START TEST locking_overlapped_coremask_via_rpc 00:05:03.446 ************************************ 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=670269 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 670269 /var/tmp/spdk.sock 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 670269 ']' 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.446 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.704 [2024-07-15 15:57:49.472891] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:03.704 [2024-07-15 15:57:49.473019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670269 ] 00:05:03.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.704 [2024-07-15 15:57:49.531372] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.704 [2024-07-15 15:57:49.531416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.704 [2024-07-15 15:57:49.642366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.704 [2024-07-15 15:57:49.642432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.704 [2024-07-15 15:57:49.642436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=670285 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 670285 /var/tmp/spdk2.sock 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 670285 ']' 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.962 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.962 [2024-07-15 15:57:49.933799] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:03.962 [2024-07-15 15:57:49.933884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670285 ] 00:05:03.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.220 [2024-07-15 15:57:50.023436] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.220 [2024-07-15 15:57:50.023486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.478 [2024-07-15 15:57:50.246514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.478 [2024-07-15 15:57:50.246580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:04.478 [2024-07-15 15:57:50.246582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.043 [2024-07-15 15:57:50.887054] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 670269 has claimed it. 
00:05:05.043 request: 00:05:05.043 { 00:05:05.043 "method": "framework_enable_cpumask_locks", 00:05:05.043 "req_id": 1 00:05:05.043 } 00:05:05.043 Got JSON-RPC error response 00:05:05.043 response: 00:05:05.043 { 00:05:05.043 "code": -32603, 00:05:05.043 "message": "Failed to claim CPU core: 2" 00:05:05.043 } 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 670269 /var/tmp/spdk.sock 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 670269 ']' 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.043 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 670285 /var/tmp/spdk2.sock 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 670285 ']' 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.300 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.558 00:05:05.558 real 0m1.984s 00:05:05.558 user 0m1.030s 00:05:05.558 sys 0m0.189s 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.558 15:57:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 ************************************ 00:05:05.558 END TEST locking_overlapped_coremask_via_rpc 00:05:05.558 ************************************ 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:05.558 15:57:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:05.558 15:57:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 670269 ]] 00:05:05.558 15:57:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 670269 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 670269 ']' 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 670269 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 670269 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 670269' 00:05:05.558 killing process with pid 670269 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 670269 00:05:05.558 15:57:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 670269 00:05:06.124 15:57:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 670285 ]] 00:05:06.124 15:57:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 670285 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 670285 ']' 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 670285 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
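What the trace above exercises: a second spdk_tgt is started on mask 0x1c (cores 2-4) with --disable-cpumask-locks so it can come up even though its mask overlaps the first target's cores 0-2 at core 2. framework_enable_cpumask_locks then succeeds on the primary socket (the first target, pid 670269, claims its cores), but the same RPC against /var/tmp/spdk2.sock fails with JSON-RPC error -32603 because core 2 is already claimed, and check_remaining_locks confirms that exactly the lock files /var/tmp/spdk_cpu_lock_000..002 remain. A minimal sketch of the same check driven by hand follows; it assumes SPDK's scripts/rpc.py is available and both targets are already listening on the sockets named here (the harness uses its rpc_cmd wrapper instead), and it infers the first target's 0x7 mask from the reactor lines above.

    # Sketch only: reproduce the core-2 collision outside the harness.
    printf 'contested cores: 0x%x\n' $((0x07 & 0x1c))                      # -> 0x4, i.e. core 2
    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first claim wins
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected: core 2 already claimed (JSON-RPC -32603)'
    ls /var/tmp/spdk_cpu_lock_*                                            # lock files for cores 0-2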
00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 670285 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 670285' 00:05:06.124 killing process with pid 670285 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 670285 00:05:06.124 15:57:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 670285 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 670269 ]] 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 670269 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 670269 ']' 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 670269 00:05:06.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (670269) - No such process 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 670269 is not found' 00:05:06.383 Process with pid 670269 is not found 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 670285 ]] 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 670285 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 670285 ']' 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 670285 00:05:06.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (670285) - No such process 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 670285 is not found' 00:05:06.383 Process with pid 670285 is not found 00:05:06.383 15:57:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.383 00:05:06.383 real 0m15.800s 00:05:06.383 user 0m27.632s 00:05:06.383 sys 0m5.054s 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.383 15:57:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.383 ************************************ 00:05:06.383 END TEST cpu_locks 00:05:06.383 ************************************ 00:05:06.641 15:57:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.641 00:05:06.641 real 0m39.704s 00:05:06.641 user 1m15.494s 00:05:06.641 sys 0m9.094s 00:05:06.641 15:57:52 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.641 15:57:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.641 ************************************ 00:05:06.641 END TEST event 00:05:06.641 ************************************ 00:05:06.641 15:57:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:06.641 15:57:52 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:06.641 15:57:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.641 15:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.642 15:57:52 -- 
common/autotest_common.sh@10 -- # set +x 00:05:06.642 ************************************ 00:05:06.642 START TEST thread 00:05:06.642 ************************************ 00:05:06.642 15:57:52 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:06.642 * Looking for test storage... 00:05:06.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:06.642 15:57:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.642 15:57:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:06.642 15:57:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.642 15:57:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.642 ************************************ 00:05:06.642 START TEST thread_poller_perf 00:05:06.642 ************************************ 00:05:06.642 15:57:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.642 [2024-07-15 15:57:52.530569] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:06.642 [2024-07-15 15:57:52.530630] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670770 ] 00:05:06.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.642 [2024-07-15 15:57:52.588598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.901 [2024-07-15 15:57:52.692238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.901 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:07.835 ====================================== 00:05:07.835 busy:2707257405 (cyc) 00:05:07.835 total_run_count: 364000 00:05:07.835 tsc_hz: 2700000000 (cyc) 00:05:07.835 ====================================== 00:05:07.835 poller_cost: 7437 (cyc), 2754 (nsec) 00:05:07.835 00:05:07.835 real 0m1.285s 00:05:07.835 user 0m1.200s 00:05:07.835 sys 0m0.080s 00:05:07.835 15:57:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.835 15:57:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 END TEST thread_poller_perf 00:05:07.835 ************************************ 00:05:07.835 15:57:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:07.835 15:57:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.835 15:57:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:07.835 15:57:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.836 15:57:53 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.094 ************************************ 00:05:08.094 START TEST thread_poller_perf 00:05:08.094 ************************************ 00:05:08.094 15:57:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.094 [2024-07-15 15:57:53.864538] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:08.094 [2024-07-15 15:57:53.864600] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670926 ] 00:05:08.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.094 [2024-07-15 15:57:53.921331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.094 [2024-07-15 15:57:54.021348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.094 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:09.473 ====================================== 00:05:09.473 busy:2702357400 (cyc) 00:05:09.473 total_run_count: 4834000 00:05:09.473 tsc_hz: 2700000000 (cyc) 00:05:09.473 ====================================== 00:05:09.473 poller_cost: 559 (cyc), 207 (nsec) 00:05:09.473 00:05:09.473 real 0m1.281s 00:05:09.473 user 0m1.207s 00:05:09.473 sys 0m0.069s 00:05:09.473 15:57:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.473 15:57:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.473 ************************************ 00:05:09.473 END TEST thread_poller_perf 00:05:09.473 ************************************ 00:05:09.473 15:57:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:09.473 15:57:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:09.473 00:05:09.473 real 0m2.718s 00:05:09.473 user 0m2.476s 00:05:09.473 sys 0m0.242s 00:05:09.473 15:57:55 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.473 15:57:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.473 ************************************ 00:05:09.473 END TEST thread 00:05:09.473 ************************************ 00:05:09.473 15:57:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.473 15:57:55 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:09.473 15:57:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.473 15:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.473 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.473 ************************************ 00:05:09.473 START TEST accel 00:05:09.473 ************************************ 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:09.473 * Looking for test storage... 00:05:09.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:09.473 15:57:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:09.473 15:57:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:09.473 15:57:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.473 15:57:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=671119 00:05:09.473 15:57:55 accel -- accel/accel.sh@63 -- # waitforlisten 671119 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@829 -- # '[' -z 671119 ']' 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.473 15:57:55 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:09.473 15:57:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
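The two result blocks above contain everything needed to derive poller_cost: busy TSC cycles divided by total_run_count gives the cycle figure, and dividing that by (tsc_hz / 1e9) gives nanoseconds. The timed pollers (1 us period) show a much higher per-invocation cost (7437 cyc, about 2754 ns) than the period-0 pollers that run on every reactor iteration (559 cyc, about 207 ns). Below is a small recomputation of the printed numbers, nothing more:

    # Sketch: recompute poller_cost from the figures printed above.
    busy=2707257405 runs=364000 tsc_hz=2700000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
        cyc = b / r                                   # cycles per poller invocation
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (hz / 1e9)
    }'
    # -> poller_cost: 7437 (cyc), 2754 (nsec); the 0 us run works out the same way:
    #    2702357400 / 4834000 = 559 cyc = 207 nsec.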
00:05:09.473 15:57:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.473 15:57:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.473 15:57:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.473 15:57:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.473 15:57:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.473 15:57:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.473 15:57:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:09.473 15:57:55 accel -- accel/accel.sh@41 -- # jq -r . 00:05:09.473 [2024-07-15 15:57:55.316567] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:09.473 [2024-07-15 15:57:55.316635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671119 ] 00:05:09.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.473 [2024-07-15 15:57:55.376580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.731 [2024-07-15 15:57:55.482326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.731 15:57:55 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.731 15:57:55 accel -- common/autotest_common.sh@862 -- # return 0 00:05:09.731 15:57:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:09.731 15:57:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:09.731 15:57:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:09.731 15:57:55 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:09.731 15:57:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:09.731 15:57:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:09.731 15:57:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:09.731 15:57:55 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.731 15:57:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.731 15:57:55 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.990 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.990 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.990 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.990 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.990 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.990 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.990 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.990 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.990 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 
15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # IFS== 00:05:09.991 15:57:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:09.991 15:57:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.991 15:57:55 accel -- accel/accel.sh@75 -- # killprocess 671119 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@948 -- # '[' -z 671119 ']' 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@952 -- # kill -0 671119 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@953 -- # uname 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 671119 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 671119' 00:05:09.991 killing process with pid 671119 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@967 -- # kill 671119 00:05:09.991 15:57:55 accel -- common/autotest_common.sh@972 -- # wait 671119 00:05:10.248 15:57:56 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:10.248 15:57:56 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:10.248 15:57:56 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:10.248 15:57:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.248 15:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 15:57:56 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:10.248 15:57:56 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
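Before exercising any workloads, accel.sh asks the freshly started target which module handles each opcode: accel_get_opc_assignments returns a JSON object, the jq filter above flattens it to one opcode=module line per entry, and the IFS== read loop records each as the expected module. Because this run loads no accel module configuration, every opcode is expected to map to software. A minimal sketch of that flattening step, using a hypothetical two-entry payload (the real opcode names come from the RPC):

    # Sketch: flatten an opcode->module map the same way accel.sh does.
    echo '{"copy": "software", "crc32c": "software"}' |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
        while IFS== read -r opc module; do
            echo "expected_opcs[$opc]=$module"
        done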
00:05:10.248 15:57:56 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.248 15:57:56 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:10.507 15:57:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.507 15:57:56 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:10.507 15:57:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:10.507 15:57:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.507 15:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.507 ************************************ 00:05:10.507 START TEST accel_missing_filename 00:05:10.507 ************************************ 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.507 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:10.507 15:57:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:10.507 [2024-07-15 15:57:56.310445] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:10.507 [2024-07-15 15:57:56.310508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671287 ] 00:05:10.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.507 [2024-07-15 15:57:56.367430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.507 [2024-07-15 15:57:56.470492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.766 [2024-07-15 15:57:56.527857] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:10.766 [2024-07-15 15:57:56.603350] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:10.766 A filename is required. 
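accel_missing_filename checks the argument-validation path: accel_perf is run with '-t 1 -w compress' and no input file, and because compress/decompress workloads require '-l <uncompressed input file>' the program exits with "A filename is required." before submitting any work; the NOT wrapper turns that expected failure into a pass. The accel_compress_verify test that follows covers the complementary case, where '-y' is rejected for a compress workload. A sketch of the two invocation shapes, with a placeholder input path (the harness points -l at spdk/test/accel/bib):

    # Sketch: the failing and the well-formed compress invocations.
    accel_perf -t 1 -w compress                    # rejected: a filename is required
    accel_perf -t 1 -w compress -l ./input.bin     # ./input.bin is a placeholder path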
00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.766 00:05:10.766 real 0m0.422s 00:05:10.766 user 0m0.321s 00:05:10.766 sys 0m0.134s 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.766 15:57:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:10.766 ************************************ 00:05:10.766 END TEST accel_missing_filename 00:05:10.766 ************************************ 00:05:10.766 15:57:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.766 15:57:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.766 15:57:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:10.766 15:57:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.766 15:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.766 ************************************ 00:05:10.766 START TEST accel_compress_verify 00:05:10.766 ************************************ 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.766 15:57:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.766 15:57:56 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:10.766 15:57:56 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:11.027 [2024-07-15 15:57:56.782330] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:11.027 [2024-07-15 15:57:56.782398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671324 ] 00:05:11.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.027 [2024-07-15 15:57:56.840535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.027 [2024-07-15 15:57:56.944678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.027 [2024-07-15 15:57:56.999905] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.338 [2024-07-15 15:57:57.081032] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:11.338 00:05:11.338 Compression does not support the verify option, aborting. 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.338 00:05:11.338 real 0m0.431s 00:05:11.338 user 0m0.324s 00:05:11.338 sys 0m0.141s 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.338 15:57:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:11.338 ************************************ 00:05:11.338 END TEST accel_compress_verify 00:05:11.338 ************************************ 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.338 15:57:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.338 ************************************ 00:05:11.338 START TEST accel_wrong_workload 00:05:11.338 ************************************ 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:11.338 15:57:57 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:11.338 15:57:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:11.338 Unsupported workload type: foobar 00:05:11.338 [2024-07-15 15:57:57.258462] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:11.338 accel_perf options: 00:05:11.338 [-h help message] 00:05:11.338 [-q queue depth per core] 00:05:11.338 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:11.338 [-T number of threads per core 00:05:11.338 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:11.338 [-t time in seconds] 00:05:11.338 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:11.338 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:11.338 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:11.338 [-l for compress/decompress workloads, name of uncompressed input file 00:05:11.338 [-S for crc32c workload, use this seed value (default 0) 00:05:11.338 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:11.338 [-f for fill workload, use this BYTE value (default 255) 00:05:11.338 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:11.338 [-y verify result if this switch is on] 00:05:11.338 [-a tasks to allocate per core (default: same value as -q)] 00:05:11.338 Can be used to spread operations across a wider range of memory. 
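accel_wrong_workload and accel_negative_buffers (below) probe the same parser from the other side: '-w foobar' is not a known workload and '-x -1' is out of range, so accel_perf refuses to start and prints the usage text above; the "Broken pipe" lines are most likely just the harness closing its capture pipe after the early exit, not test failures. For contrast, a sketch of well-formed invocations built only from flags listed in that usage text (the first is exactly the crc32c run the harness performs next; the xor line is illustrative):

    # Sketch: valid accel_perf invocations using only documented flags.
    accel_perf -t 1 -w crc32c -S 32 -y     # CRC-32C with seed 32, verify results
    accel_perf -t 1 -w xor -x 2 -y         # xor needs at least two source buffers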
00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.338 00:05:11.338 real 0m0.024s 00:05:11.338 user 0m0.013s 00:05:11.338 sys 0m0.011s 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.338 15:57:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:11.338 ************************************ 00:05:11.338 END TEST accel_wrong_workload 00:05:11.338 ************************************ 00:05:11.338 Error: writing output failed: Broken pipe 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.338 15:57:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.338 15:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.597 ************************************ 00:05:11.597 START TEST accel_negative_buffers 00:05:11.597 ************************************ 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.597 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:11.598 15:57:57 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:11.598 -x option must be non-negative. 
00:05:11.598 [2024-07-15 15:57:57.326114] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:11.598 accel_perf options: 00:05:11.598 [-h help message] 00:05:11.598 [-q queue depth per core] 00:05:11.598 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:11.598 [-T number of threads per core 00:05:11.598 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:11.598 [-t time in seconds] 00:05:11.598 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:11.598 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:11.598 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:11.598 [-l for compress/decompress workloads, name of uncompressed input file 00:05:11.598 [-S for crc32c workload, use this seed value (default 0) 00:05:11.598 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:11.598 [-f for fill workload, use this BYTE value (default 255) 00:05:11.598 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:11.598 [-y verify result if this switch is on] 00:05:11.598 [-a tasks to allocate per core (default: same value as -q)] 00:05:11.598 Can be used to spread operations across a wider range of memory. 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.598 00:05:11.598 real 0m0.023s 00:05:11.598 user 0m0.012s 00:05:11.598 sys 0m0.011s 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.598 15:57:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:11.598 ************************************ 00:05:11.598 END TEST accel_negative_buffers 00:05:11.598 ************************************ 00:05:11.598 Error: writing output failed: Broken pipe 00:05:11.598 15:57:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.598 15:57:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:11.598 15:57:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:11.598 15:57:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.598 15:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.598 ************************************ 00:05:11.598 START TEST accel_crc32c 00:05:11.598 ************************************ 00:05:11.598 15:57:57 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:11.598 15:57:57 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:11.598 [2024-07-15 15:57:57.394408] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:11.598 [2024-07-15 15:57:57.394472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671500 ] 00:05:11.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.598 [2024-07-15 15:57:57.452813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.598 [2024-07-15 15:57:57.555822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.857 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.858 15:57:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:13.237 15:57:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.237 00:05:13.237 real 0m1.429s 00:05:13.237 user 0m1.295s 00:05:13.237 sys 0m0.136s 00:05:13.237 15:57:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.237 15:57:58 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:13.237 ************************************ 00:05:13.237 END TEST accel_crc32c 00:05:13.237 ************************************ 00:05:13.237 15:57:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:13.237 15:57:58 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:13.237 15:57:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:13.237 15:57:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.237 15:57:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.237 ************************************ 00:05:13.237 START TEST accel_crc32c_C2 00:05:13.237 ************************************ 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.237 15:57:58 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.237 15:57:58 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:13.237 [2024-07-15 15:57:58.873032] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:13.237 [2024-07-15 15:57:58.873094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671659 ] 00:05:13.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.237 [2024-07-15 15:57:58.932549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.237 [2024-07-15 15:57:59.037465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.237 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.237 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:13.238 15:57:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.616 00:05:14.616 real 0m1.426s 00:05:14.616 user 0m1.294s 00:05:14.616 sys 0m0.133s 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.616 15:58:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:14.616 ************************************ 00:05:14.616 END TEST accel_crc32c_C2 00:05:14.616 ************************************ 00:05:14.616 15:58:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.616 15:58:00 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:14.616 15:58:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:14.616 15:58:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.616 15:58:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.616 ************************************ 00:05:14.616 START TEST accel_copy 00:05:14.616 ************************************ 00:05:14.616 15:58:00 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:14.616 [2024-07-15 15:58:00.346344] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:14.616 [2024-07-15 15:58:00.346405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671930 ] 00:05:14.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.616 [2024-07-15 15:58:00.403358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.616 [2024-07-15 15:58:00.506598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.616 15:58:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 
15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:15.993 15:58:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.993 00:05:15.993 real 0m1.425s 00:05:15.993 user 0m1.284s 00:05:15.993 sys 0m0.142s 00:05:15.993 15:58:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.993 15:58:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:15.993 ************************************ 00:05:15.993 END TEST accel_copy 00:05:15.993 ************************************ 00:05:15.993 15:58:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.993 15:58:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:15.993 15:58:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:15.993 15:58:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.993 15:58:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.993 ************************************ 00:05:15.993 START TEST accel_fill 00:05:15.993 ************************************ 00:05:15.993 15:58:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:15.993 15:58:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:15.993 [2024-07-15 15:58:01.819399] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:15.993 [2024-07-15 15:58:01.819462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672093 ] 00:05:15.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.993 [2024-07-15 15:58:01.875632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.993 [2024-07-15 15:58:01.979827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.252 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:16.253 15:58:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:17.628 15:58:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.628 00:05:17.628 real 0m1.434s 00:05:17.628 user 0m1.306s 00:05:17.628 sys 0m0.129s 00:05:17.628 15:58:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.628 15:58:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:17.628 ************************************ 00:05:17.628 END TEST accel_fill 00:05:17.628 ************************************ 00:05:17.628 15:58:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.628 15:58:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:17.628 15:58:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:17.628 15:58:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.628 15:58:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.628 ************************************ 00:05:17.628 START TEST accel_copy_crc32c 00:05:17.628 ************************************ 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:17.628 [2024-07-15 15:58:03.303039] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:17.628 [2024-07-15 15:58:03.303107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672245 ] 00:05:17.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.628 [2024-07-15 15:58:03.360637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.628 [2024-07-15 15:58:03.462255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:17.628 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.629 
15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.629 15:58:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.007 00:05:19.007 real 0m1.424s 00:05:19.007 user 0m1.293s 00:05:19.007 sys 0m0.132s 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.007 15:58:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:19.007 ************************************ 00:05:19.007 END TEST accel_copy_crc32c 00:05:19.007 ************************************ 00:05:19.007 15:58:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.007 15:58:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:19.007 15:58:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:19.007 15:58:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.007 15:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.007 ************************************ 00:05:19.007 START TEST accel_copy_crc32c_C2 00:05:19.007 ************************************ 00:05:19.008 15:58:04 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:19.008 [2024-07-15 15:58:04.779832] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:19.008 [2024-07-15 15:58:04.779893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672467 ] 00:05:19.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.008 [2024-07-15 15:58:04.836604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.008 [2024-07-15 15:58:04.940728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.008 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.009 15:58:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.387 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.388 00:05:20.388 real 0m1.432s 00:05:20.388 user 0m1.298s 00:05:20.388 sys 0m0.136s 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.388 15:58:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:20.388 ************************************ 00:05:20.388 END TEST accel_copy_crc32c_C2 00:05:20.388 ************************************ 00:05:20.388 15:58:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.388 15:58:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:20.388 15:58:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:20.388 15:58:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.388 15:58:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.388 ************************************ 00:05:20.388 START TEST accel_dualcast 00:05:20.388 ************************************ 00:05:20.388 15:58:06 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:20.388 15:58:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:20.388 [2024-07-15 15:58:06.257455] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:20.388 [2024-07-15 15:58:06.257520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672675 ] 00:05:20.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.388 [2024-07-15 15:58:06.316705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.647 [2024-07-15 15:58:06.417319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:20.647 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.648 15:58:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:22.027 15:58:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.027 00:05:22.027 real 0m1.429s 00:05:22.027 user 0m1.303s 00:05:22.027 sys 0m0.127s 00:05:22.027 15:58:07 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.027 15:58:07 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:22.027 ************************************ 00:05:22.027 END TEST accel_dualcast 00:05:22.027 ************************************ 00:05:22.027 15:58:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.027 15:58:07 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:22.027 15:58:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:22.027 15:58:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.027 15:58:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.027 ************************************ 00:05:22.027 START TEST accel_compare 00:05:22.027 ************************************ 00:05:22.027 15:58:07 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:22.027 15:58:07 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:22.027 [2024-07-15 15:58:07.738915] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
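Note on the pass that has just started: every accel test in this log is launched through SPDK's accel_perf example binary, with a generated JSON config handed in over /dev/fd/62; because accel_json_cfg is empty in these runs, the trace falls through to the built-in software module (hence accel_module=software above). A minimal local sketch, assuming an already-built SPDK checkout (so build/examples/accel_perf exists) and assuming accel_perf can simply be run without the -c config it is given here:
  # software-path 'compare' workload for 1 second, verifying results (-y),
  # mirroring the flags logged by accel.sh above
  cd /path/to/spdk            # hypothetical checkout location
  ./build/examples/accel_perf -t 1 -w compare -y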
00:05:22.027 [2024-07-15 15:58:07.738984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672839 ] 00:05:22.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.028 [2024-07-15 15:58:07.796769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.028 [2024-07-15 15:58:07.899944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.028 15:58:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 
15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:23.408 15:58:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.408 00:05:23.408 real 0m1.433s 00:05:23.408 user 0m1.305s 00:05:23.408 sys 0m0.130s 00:05:23.408 15:58:09 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.408 15:58:09 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:23.408 ************************************ 00:05:23.408 END TEST accel_compare 00:05:23.408 ************************************ 00:05:23.408 15:58:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.408 15:58:09 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:23.408 15:58:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.408 15:58:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.408 15:58:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.408 ************************************ 00:05:23.408 START TEST accel_xor 00:05:23.408 ************************************ 00:05:23.408 15:58:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:23.408 15:58:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:23.408 [2024-07-15 15:58:09.217362] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
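The xor pass starting here uses the same wrapper; its trace records val=2, which appears to be the xor source-buffer count (the follow-up test below records 3 after adding -x 3). A comparable stand-alone invocation, under the same assumptions as the sketch above:
  # 1-second software xor with the default two source buffers, verified with -y
  ./build/examples/accel_perf -t 1 -w xor -y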
00:05:23.408 [2024-07-15 15:58:09.217420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673009 ] 00:05:23.408 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.408 [2024-07-15 15:58:09.278432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.408 [2024-07-15 15:58:09.382978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.669 15:58:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.062 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.063 00:05:25.063 real 0m1.436s 00:05:25.063 user 0m1.300s 00:05:25.063 sys 0m0.138s 00:05:25.063 15:58:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.063 15:58:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:25.063 ************************************ 00:05:25.063 END TEST accel_xor 00:05:25.063 ************************************ 00:05:25.063 15:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.063 15:58:10 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:25.063 15:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.063 15:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.063 15:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.063 ************************************ 00:05:25.063 START TEST accel_xor 00:05:25.063 ************************************ 00:05:25.063 15:58:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:25.063 15:58:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:25.063 [2024-07-15 15:58:10.701899] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
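This second xor pass is identical except for the extra -x 3, and the trace accordingly records three source buffers. The equivalent stand-alone sketch, same assumptions as before:
  # same xor workload, but with three source buffers via -x 3
  ./build/examples/accel_perf -t 1 -w xor -y -x 3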
00:05:25.063 [2024-07-15 15:58:10.701971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673263 ] 00:05:25.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.063 [2024-07-15 15:58:10.760126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.064 [2024-07-15 15:58:10.863830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:25.064 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.064 15:58:10 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.065 15:58:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:26.450 15:58:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.450 00:05:26.450 real 0m1.425s 00:05:26.450 user 0m1.299s 00:05:26.450 sys 0m0.128s 00:05:26.450 15:58:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.450 15:58:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:26.450 ************************************ 00:05:26.450 END TEST accel_xor 00:05:26.450 ************************************ 00:05:26.450 15:58:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.450 15:58:12 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:26.450 15:58:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:26.450 15:58:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.450 15:58:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.450 ************************************ 00:05:26.450 START TEST accel_dif_verify 00:05:26.450 ************************************ 00:05:26.450 15:58:12 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:26.450 [2024-07-15 15:58:12.177145] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
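The dif_verify pass drops the -y flag and instead records DIF-related sizes before launching accel_perf ('4096 bytes' buffers plus '512 bytes' and '8 bytes' values in the trace). A bare invocation mirroring the logged flags, again assuming the /dev/fd/62 config can be omitted for the software path:
  # 1-second software dif_verify workload, flags as logged by accel.sh
  ./build/examples/accel_perf -t 1 -w dif_verify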
00:05:26.450 [2024-07-15 15:58:12.177215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673425 ] 00:05:26.450 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.450 [2024-07-15 15:58:12.235407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.450 [2024-07-15 15:58:12.337760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.450 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.451 15:58:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.829 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:27.830 15:58:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.830 00:05:27.830 real 0m1.431s 00:05:27.830 user 0m1.304s 00:05:27.830 sys 0m0.130s 00:05:27.830 15:58:13 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.830 15:58:13 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:27.830 ************************************ 00:05:27.830 END TEST accel_dif_verify 00:05:27.830 ************************************ 00:05:27.830 15:58:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.830 15:58:13 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:27.830 15:58:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:27.830 15:58:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.830 15:58:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.830 ************************************ 00:05:27.830 START TEST accel_dif_generate 00:05:27.830 ************************************ 00:05:27.830 15:58:13 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.830 
15:58:13 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:27.830 15:58:13 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:27.830 [2024-07-15 15:58:13.655538] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:27.830 [2024-07-15 15:58:13.655608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673587 ] 00:05:27.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.830 [2024-07-15 15:58:13.714300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.830 [2024-07-15 15:58:13.824493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:28.090 15:58:13 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.090 15:58:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.469 15:58:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:29.469 15:58:15 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.469 00:05:29.469 real 0m1.447s 00:05:29.469 user 0m1.306s 00:05:29.469 sys 0m0.144s 00:05:29.469 15:58:15 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.469 15:58:15 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:29.469 ************************************ 00:05:29.469 END TEST accel_dif_generate 00:05:29.469 ************************************ 00:05:29.469 15:58:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.469 15:58:15 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:29.469 15:58:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:29.469 15:58:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.469 15:58:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.469 ************************************ 00:05:29.469 START TEST accel_dif_generate_copy 00:05:29.469 ************************************ 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:29.469 [2024-07-15 15:58:15.151625] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:29.469 [2024-07-15 15:58:15.151690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673855 ] 00:05:29.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.469 [2024-07-15 15:58:15.211134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.469 [2024-07-15 15:58:15.315579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.469 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
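For orientation while reading this trace: the accel_dif_generate_copy block beginning above reduces to a single accel_perf command, recorded at accel/accel.sh@12. A minimal standalone sketch of that invocation follows, verbatim from this log apart from the $SPDK shorthand; the JSON accel config on /dev/fd/62 is supplied by the harness (accel.sh's build_accel_config) and is not reproduced here.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run the workload for one second; -w dif_generate_copy: operation under test
  # -c /dev/fd/62: accel JSON config, fed on fd 62 by the harness's build_accel_config
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate_copy

The -t 1 duration is consistent with the roughly one-second 'real' time reported when each of these sub-tests finishes.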
00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.470 15:58:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.878 00:05:30.878 real 0m1.432s 00:05:30.878 user 0m1.301s 00:05:30.878 sys 0m0.133s 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.878 15:58:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:30.878 ************************************ 00:05:30.878 END TEST accel_dif_generate_copy 00:05:30.878 ************************************ 00:05:30.878 15:58:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.878 15:58:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:30.878 15:58:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.878 15:58:16 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:30.878 15:58:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.878 15:58:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.878 ************************************ 00:05:30.878 START TEST accel_comp 00:05:30.878 ************************************ 00:05:30.878 15:58:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.878 15:58:16 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:30.878 [2024-07-15 15:58:16.634147] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:30.878 [2024-07-15 15:58:16.634212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674014 ] 00:05:30.878 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.878 [2024-07-15 15:58:16.689986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.878 [2024-07-15 15:58:16.791319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.878 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.879 15:58:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:32.258 15:58:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.258 00:05:32.258 real 0m1.421s 00:05:32.258 user 0m1.299s 00:05:32.258 sys 0m0.124s 00:05:32.258 15:58:18 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.258 15:58:18 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:32.258 ************************************ 00:05:32.258 END TEST accel_comp 00:05:32.258 ************************************ 00:05:32.258 15:58:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.258 15:58:18 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.258 15:58:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:32.258 15:58:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.258 15:58:18 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.258 ************************************ 00:05:32.258 START TEST accel_decomp 00:05:32.258 ************************************ 00:05:32.258 15:58:18 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:32.258 15:58:18 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:32.258 [2024-07-15 15:58:18.105101] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
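The accel_decomp run starting here adds an input payload to the same pattern: the command at accel/accel.sh@12 above points -l at the bundled test/accel/bib file and passes -y. A sketch under the same assumptions as the earlier one ($SPDK shorthand, config on fd 62 supplied by the harness); the flag glosses in the comments reflect how the harness uses them here rather than accel_perf's own help text.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -w decompress with -l <file>: use the test/accel/bib file as the workload's input
  # -y: additional flag passed by the harness for the decompress tests (kept exactly as recorded)
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y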
00:05:32.258 [2024-07-15 15:58:18.105164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674172 ] 00:05:32.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.258 [2024-07-15 15:58:18.163842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.519 [2024-07-15 15:58:18.269272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.519 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.520 15:58:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:32.520 15:58:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.520 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.520 15:58:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:33.899 15:58:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.899 00:05:33.899 real 0m1.423s 00:05:33.899 user 0m1.299s 00:05:33.899 sys 0m0.126s 00:05:33.899 15:58:19 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.899 15:58:19 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:33.899 ************************************ 00:05:33.899 END TEST accel_decomp 00:05:33.899 ************************************ 00:05:33.899 15:58:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.899 15:58:19 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.899 15:58:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:33.899 15:58:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.899 15:58:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.899 ************************************ 00:05:33.899 START TEST accel_decomp_full 00:05:33.899 ************************************ 00:05:33.899 15:58:19 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.899 15:58:19 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:33.899 [2024-07-15 15:58:19.577797] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:33.899 [2024-07-15 15:58:19.577862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674444 ] 00:05:33.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.899 [2024-07-15 15:58:19.635485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.899 [2024-07-15 15:58:19.739040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.899 15:58:19 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.899 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.900 15:58:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:35.277 15:58:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.277 00:05:35.277 real 0m1.442s 00:05:35.277 user 0m1.305s 00:05:35.277 sys 0m0.138s 00:05:35.277 15:58:20 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.277 15:58:21 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 ************************************ 00:05:35.277 END TEST accel_decomp_full 00:05:35.277 ************************************ 00:05:35.277 15:58:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.277 15:58:21 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:35.277 15:58:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:35.277 15:58:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.277 15:58:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.277 ************************************ 00:05:35.277 START TEST accel_decomp_mcore 00:05:35.277 ************************************ 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.277 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.278 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:35.278 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:35.278 [2024-07-15 15:58:21.069234] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
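The *_mcore variants repeat the decompress workload across several reactors; the only change to the recorded command is the -m 0xf core mask, which matches the 'Total cores available: 4' notice and the four 'Reactor started' lines that follow. Sketch, same assumptions as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0xf: core mask selecting four cores, so accel_perf starts a reactor on each of cores 0-3
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf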
00:05:35.278 [2024-07-15 15:58:21.069292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674602 ] 00:05:35.278 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.278 [2024-07-15 15:58:21.125586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.278 [2024-07-15 15:58:21.231943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.278 [2024-07-15 15:58:21.231997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.278 [2024-07-15 15:58:21.232066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.278 [2024-07-15 15:58:21.232069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:35.538 15:58:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.917 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.918 00:05:36.918 real 0m1.455s 00:05:36.918 user 0m4.763s 00:05:36.918 sys 0m0.149s 00:05:36.918 15:58:22 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.918 15:58:22 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:36.918 ************************************ 00:05:36.918 END TEST accel_decomp_mcore 00:05:36.918 ************************************ 00:05:36.918 15:58:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.918 15:58:22 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.918 15:58:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:36.918 15:58:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.918 15:58:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.918 ************************************ 00:05:36.918 START TEST accel_decomp_full_mcore 00:05:36.918 ************************************ 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:36.918 [2024-07-15 15:58:22.572469] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:36.918 [2024-07-15 15:58:22.572530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674764 ] 00:05:36.918 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.918 [2024-07-15 15:58:22.630537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.918 [2024-07-15 15:58:22.737375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.918 [2024-07-15 15:58:22.737438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.918 [2024-07-15 15:58:22.737544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.918 [2024-07-15 15:58:22.737552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.918 15:58:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.296 00:05:38.296 real 0m1.454s 00:05:38.296 user 0m4.763s 00:05:38.296 sys 0m0.146s 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.296 15:58:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:38.296 ************************************ 00:05:38.296 END TEST accel_decomp_full_mcore 00:05:38.296 ************************************ 00:05:38.296 15:58:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.296 15:58:24 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:38.296 15:58:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:38.296 15:58:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.296 15:58:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.296 ************************************ 00:05:38.296 START TEST accel_decomp_mthread 00:05:38.296 ************************************ 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:38.296 [2024-07-15 15:58:24.071692] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:38.296 [2024-07-15 15:58:24.071762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675041 ] 00:05:38.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.296 [2024-07-15 15:58:24.129509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.296 [2024-07-15 15:58:24.232163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.296 15:58:24 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.296 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.297 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.556 15:58:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.493 00:05:39.493 real 0m1.436s 00:05:39.493 user 0m1.305s 00:05:39.493 sys 0m0.134s 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.493 15:58:25 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:39.493 ************************************ 00:05:39.493 END TEST accel_decomp_mthread 00:05:39.493 ************************************ 00:05:39.751 15:58:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.751 15:58:25 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.751 15:58:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:39.751 15:58:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.751 15:58:25 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.751 ************************************ 00:05:39.751 START TEST accel_decomp_full_mthread 00:05:39.751 ************************************ 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.751 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.752 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.752 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.752 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:39.752 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:39.752 [2024-07-15 15:58:25.557346] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:39.752 [2024-07-15 15:58:25.557409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675197 ] 00:05:39.752 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.752 [2024-07-15 15:58:25.614355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.752 [2024-07-15 15:58:25.718102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:40.009 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.010 15:58:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.388 15:58:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.388 15:58:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.388 15:58:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.388 15:58:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.388 00:05:41.388 real 0m1.463s 00:05:41.388 user 0m1.337s 00:05:41.388 sys 0m0.128s 00:05:41.389 15:58:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.389 15:58:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:41.389 ************************************ 00:05:41.389 END TEST accel_decomp_full_mthread 
00:05:41.389 ************************************ 00:05:41.389 15:58:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.389 15:58:27 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:41.389 15:58:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:41.389 15:58:27 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:41.389 15:58:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:41.389 15:58:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.389 15:58:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.389 15:58:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.389 15:58:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.389 15:58:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.389 15:58:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.389 15:58:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.389 15:58:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:41.389 15:58:27 accel -- accel/accel.sh@41 -- # jq -r . 00:05:41.389 ************************************ 00:05:41.389 START TEST accel_dif_functional_tests 00:05:41.389 ************************************ 00:05:41.389 15:58:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:41.389 [2024-07-15 15:58:27.088218] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:41.389 [2024-07-15 15:58:27.088297] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675355 ] 00:05:41.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.389 [2024-07-15 15:58:27.141308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.389 [2024-07-15 15:58:27.244008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.389 [2024-07-15 15:58:27.244074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.389 [2024-07-15 15:58:27.244077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.389 00:05:41.389 00:05:41.389 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.389 http://cunit.sourceforge.net/ 00:05:41.389 00:05:41.389 00:05:41.389 Suite: accel_dif 00:05:41.389 Test: verify: DIF generated, GUARD check ...passed 00:05:41.389 Test: verify: DIF generated, APPTAG check ...passed 00:05:41.389 Test: verify: DIF generated, REFTAG check ...passed 00:05:41.389 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:58:27.334041] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:41.389 passed 00:05:41.389 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:58:27.334107] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:41.389 passed 00:05:41.389 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:58:27.334139] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:41.389 passed 00:05:41.389 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:41.389 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 15:58:27.334201] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:41.389 passed 00:05:41.389 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:41.389 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:41.389 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:41.389 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:58:27.334335] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:41.389 passed 00:05:41.389 Test: verify copy: DIF generated, GUARD check ...passed 00:05:41.389 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:41.389 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:41.389 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:58:27.334494] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:41.389 passed 00:05:41.389 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:58:27.334531] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:41.389 passed 00:05:41.389 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:58:27.334562] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:41.389 passed 00:05:41.389 Test: generate copy: DIF generated, GUARD check ...passed 00:05:41.389 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:41.389 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:41.389 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:41.389 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:41.389 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:41.389 Test: generate copy: iovecs-len validate ...[2024-07-15 15:58:27.334782] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:41.389 passed 00:05:41.389 Test: generate copy: buffer alignment validate ...passed 00:05:41.389 00:05:41.389 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.389 suites 1 1 n/a 0 0 00:05:41.389 tests 26 26 26 0 0 00:05:41.389 asserts 115 115 115 0 n/a 00:05:41.389 00:05:41.389 Elapsed time = 0.002 seconds 00:05:41.649 00:05:41.649 real 0m0.519s 00:05:41.649 user 0m0.798s 00:05:41.649 sys 0m0.161s 00:05:41.649 15:58:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.649 15:58:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:41.649 ************************************ 00:05:41.649 END TEST accel_dif_functional_tests 00:05:41.649 ************************************ 00:05:41.649 15:58:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.649 00:05:41.649 real 0m32.378s 00:05:41.649 user 0m35.950s 00:05:41.649 sys 0m4.353s 00:05:41.649 15:58:27 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.649 15:58:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.649 ************************************ 00:05:41.649 END TEST accel 00:05:41.649 ************************************ 00:05:41.649 15:58:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.649 15:58:27 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:41.649 15:58:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.649 15:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.649 15:58:27 -- common/autotest_common.sh@10 -- # set +x 00:05:41.649 ************************************ 00:05:41.649 START TEST accel_rpc 00:05:41.649 ************************************ 00:05:41.649 15:58:27 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:41.907 * Looking for test storage... 00:05:41.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:41.907 15:58:27 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.907 15:58:27 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=675541 00:05:41.907 15:58:27 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:41.907 15:58:27 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 675541 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 675541 ']' 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.907 15:58:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.907 [2024-07-15 15:58:27.743184] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:05:41.907 [2024-07-15 15:58:27.743273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675541 ] 00:05:41.907 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.907 [2024-07-15 15:58:27.800780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.907 [2024-07-15 15:58:27.905029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.166 15:58:27 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.166 15:58:27 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.166 15:58:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:42.166 15:58:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:42.166 15:58:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:42.166 15:58:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:42.166 15:58:27 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:42.166 15:58:27 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.166 15:58:27 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.166 15:58:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.166 ************************************ 00:05:42.166 START TEST accel_assign_opcode 00:05:42.166 ************************************ 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:42.166 [2024-07-15 15:58:27.969656] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:42.166 [2024-07-15 15:58:27.977669] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.166 15:58:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.425 software 00:05:42.425 00:05:42.425 real 0m0.282s 00:05:42.425 user 0m0.042s 00:05:42.425 sys 0m0.007s 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.425 15:58:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:42.425 ************************************ 00:05:42.425 END TEST accel_assign_opcode 00:05:42.425 ************************************ 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.425 15:58:28 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 675541 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 675541 ']' 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 675541 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 675541 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 675541' 00:05:42.425 killing process with pid 675541 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@967 -- # kill 675541 00:05:42.425 15:58:28 accel_rpc -- common/autotest_common.sh@972 -- # wait 675541 00:05:42.993 00:05:42.993 real 0m1.063s 00:05:42.993 user 0m1.008s 00:05:42.993 sys 0m0.409s 00:05:42.993 15:58:28 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.993 15:58:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.993 ************************************ 00:05:42.993 END TEST accel_rpc 00:05:42.993 ************************************ 00:05:42.993 15:58:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.993 15:58:28 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.993 15:58:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.993 15:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.993 15:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:42.993 ************************************ 00:05:42.993 START TEST app_cmdline 00:05:42.993 ************************************ 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.993 * Looking for test storage... 
00:05:42.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.993 15:58:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:42.993 15:58:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=675747 00:05:42.993 15:58:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:42.993 15:58:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 675747 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 675747 ']' 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.993 15:58:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.993 [2024-07-15 15:58:28.850843] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:05:42.993 [2024-07-15 15:58:28.850931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675747 ] 00:05:42.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.993 [2024-07-15 15:58:28.907111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.252 [2024-07-15 15:58:29.014529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.252 15:58:29 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.252 15:58:29 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:43.252 15:58:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:43.510 { 00:05:43.510 "version": "SPDK v24.09-pre git sha1 255871c19", 00:05:43.510 "fields": { 00:05:43.510 "major": 24, 00:05:43.510 "minor": 9, 00:05:43.510 "patch": 0, 00:05:43.510 "suffix": "-pre", 00:05:43.510 "commit": "255871c19" 00:05:43.510 } 00:05:43.510 } 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.510 15:58:29 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.510 15:58:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.510 15:58:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.510 15:58:29 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.769 15:58:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.769 15:58:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.769 15:58:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:43.769 15:58:29 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.769 request: 00:05:43.769 { 00:05:43.769 "method": "env_dpdk_get_mem_stats", 00:05:43.769 "req_id": 1 00:05:43.769 } 00:05:43.769 Got JSON-RPC error response 00:05:43.769 response: 00:05:43.769 { 00:05:43.769 "code": -32601, 00:05:43.769 "message": "Method not found" 00:05:43.769 } 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.030 15:58:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 675747 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 675747 ']' 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 675747 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 675747 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 675747' 00:05:44.030 killing process with pid 675747 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@967 -- # kill 675747 00:05:44.030 15:58:29 app_cmdline -- common/autotest_common.sh@972 -- # wait 675747 00:05:44.289 00:05:44.289 real 0m1.481s 00:05:44.289 user 0m1.840s 00:05:44.289 sys 0m0.428s 00:05:44.289 15:58:30 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.289 
15:58:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.289 ************************************ 00:05:44.289 END TEST app_cmdline 00:05:44.289 ************************************ 00:05:44.289 15:58:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.289 15:58:30 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.289 15:58:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.289 15:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.289 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:44.289 ************************************ 00:05:44.289 START TEST version 00:05:44.289 ************************************ 00:05:44.289 15:58:30 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.547 * Looking for test storage... 00:05:44.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.547 15:58:30 version -- app/version.sh@17 -- # get_header_version major 00:05:44.547 15:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # cut -f2 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.547 15:58:30 version -- app/version.sh@17 -- # major=24 00:05:44.547 15:58:30 version -- app/version.sh@18 -- # get_header_version minor 00:05:44.547 15:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # cut -f2 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.547 15:58:30 version -- app/version.sh@18 -- # minor=9 00:05:44.547 15:58:30 version -- app/version.sh@19 -- # get_header_version patch 00:05:44.547 15:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # cut -f2 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.547 15:58:30 version -- app/version.sh@19 -- # patch=0 00:05:44.547 15:58:30 version -- app/version.sh@20 -- # get_header_version suffix 00:05:44.547 15:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # cut -f2 00:05:44.547 15:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.547 15:58:30 version -- app/version.sh@20 -- # suffix=-pre 00:05:44.547 15:58:30 version -- app/version.sh@22 -- # version=24.9 00:05:44.547 15:58:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:44.547 15:58:30 version -- app/version.sh@28 -- # version=24.9rc0 00:05:44.547 15:58:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.547 15:58:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
00:05:44.547 15:58:30 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:44.547 15:58:30 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:44.547 00:05:44.547 real 0m0.117s 00:05:44.547 user 0m0.057s 00:05:44.547 sys 0m0.082s 00:05:44.547 15:58:30 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.547 15:58:30 version -- common/autotest_common.sh@10 -- # set +x 00:05:44.547 ************************************ 00:05:44.547 END TEST version 00:05:44.547 ************************************ 00:05:44.547 15:58:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.548 15:58:30 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@198 -- # uname -s 00:05:44.548 15:58:30 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:44.548 15:58:30 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:44.548 15:58:30 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:44.548 15:58:30 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:44.548 15:58:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.548 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 15:58:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:44.548 15:58:30 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:44.548 15:58:30 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:44.548 15:58:30 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:44.548 15:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.548 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 ************************************ 00:05:44.548 START TEST nvmf_tcp 00:05:44.548 ************************************ 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:44.548 * Looking for test storage... 00:05:44.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.548 15:58:30 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.548 15:58:30 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.548 15:58:30 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.548 15:58:30 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.548 15:58:30 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.548 15:58:30 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.548 15:58:30 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:44.548 15:58:30 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:44.548 15:58:30 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.548 15:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.807 ************************************ 00:05:44.807 START TEST nvmf_example 00:05:44.807 ************************************ 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:44.807 * Looking for test storage... 
00:05:44.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.807 15:58:30 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:44.808 15:58:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:46.715 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:46.715 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:46.715 Found net devices under 
0000:09:00.0: cvl_0_0 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:46.715 Found net devices under 0000:09:00.1: cvl_0_1 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.715 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:46.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:05:46.716 00:05:46.716 --- 10.0.0.2 ping statistics --- 00:05:46.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.716 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:46.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:05:46.716 00:05:46.716 --- 10.0.0.1 ping statistics --- 00:05:46.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.716 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:46.716 15:58:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=677642 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 677642 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 677642 ']' 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
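For reference, the network plumbing that nvmftestinit performs in the trace above reduces to a short iproute2 sequence: the first e810 port (cvl_0_0) is moved into a private namespace to act as the target side, the sibling port (cvl_0_1) stays in the root namespace as the initiator side, and both are addressed on 10.0.0.0/24 so NVMe/TCP traffic flows over the physical link between the two ports. A condensed sketch, assuming the same interface and namespace names that the script derived at run time (every command below appears verbatim in the log; only the grouping comments are added):

    # flush any stale addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # create the target namespace and move the first port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side (root ns) gets 10.0.0.1, target side (namespace) gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up and open TCP/4420 on the initiator-facing port
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity pings in both directions, matching the ping statistics above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1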
00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.974 15:58:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.974 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:47.910 15:58:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:47.910 EAL: No free 2048 kB hugepages reported on node 1 
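Before the perf results below, the target-side configuration that the script just issued through rpc_cmd can be read as the equivalent rpc.py calls. This is only a sketch under the same conditions as the run above: the RPCs go to the example nvmf app started inside cvl_0_0_ns_spdk on the default /var/tmp/spdk.sock, the relative paths assume the spdk checkout as working directory, and the argument strings are copied from the trace rather than re-derived:

    # target side: TCP transport, one 64 MiB malloc bdev with 512 B blocks,
    # one subsystem exposing it on 10.0.0.2:4420 (arguments exactly as in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # returns the bdev name, "Malloc0" here
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: 10 s of 4 KiB random I/O at queue depth 64, 30% reads,
    # against the listener created above
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'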
00:06:00.205 Initializing NVMe Controllers 00:06:00.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:00.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:00.205 Initialization complete. Launching workers. 00:06:00.205 ======================================================== 00:06:00.205 Latency(us) 00:06:00.205 Device Information : IOPS MiB/s Average min max 00:06:00.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15269.89 59.65 4190.84 876.94 20306.04 00:06:00.205 ======================================================== 00:06:00.205 Total : 15269.89 59.65 4190.84 876.94 20306.04 00:06:00.205 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:00.205 rmmod nvme_tcp 00:06:00.205 rmmod nvme_fabrics 00:06:00.205 rmmod nvme_keyring 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 677642 ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 677642 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 677642 ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 677642 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 677642 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 677642' 00:06:00.205 killing process with pid 677642 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 677642 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 677642 00:06:00.205 nvmf threads initialize successfully 00:06:00.205 bdev subsystem init successfully 00:06:00.205 created a nvmf target service 00:06:00.205 create targets's poll groups done 00:06:00.205 all subsystems of target started 00:06:00.205 nvmf target is running 00:06:00.205 all subsystems of target stopped 00:06:00.205 destroy targets's poll groups done 00:06:00.205 destroyed the nvmf target service 00:06:00.205 bdev subsystem finish successfully 00:06:00.205 nvmf threads destroy successfully 00:06:00.205 15:58:44 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:00.205 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.461 15:58:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:00.461 15:58:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:00.461 15:58:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.461 15:58:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:00.461 00:06:00.461 real 0m15.893s 00:06:00.461 user 0m45.356s 00:06:00.461 sys 0m3.233s 00:06:00.462 15:58:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.462 15:58:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:00.462 ************************************ 00:06:00.462 END TEST nvmf_example 00:06:00.462 ************************************ 00:06:00.719 15:58:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:00.719 15:58:46 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:00.719 15:58:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:00.719 15:58:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.719 15:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.719 ************************************ 00:06:00.719 START TEST nvmf_filesystem 00:06:00.719 ************************************ 00:06:00.719 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:00.719 * Looking for test storage... 
00:06:00.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:00.720 15:58:46 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:00.720 #define SPDK_CONFIG_H 00:06:00.720 #define SPDK_CONFIG_APPS 1 00:06:00.720 #define SPDK_CONFIG_ARCH native 00:06:00.720 #undef SPDK_CONFIG_ASAN 00:06:00.720 #undef SPDK_CONFIG_AVAHI 00:06:00.720 #undef SPDK_CONFIG_CET 00:06:00.720 #define SPDK_CONFIG_COVERAGE 1 00:06:00.720 #define SPDK_CONFIG_CROSS_PREFIX 00:06:00.720 #undef SPDK_CONFIG_CRYPTO 00:06:00.720 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:00.720 #undef SPDK_CONFIG_CUSTOMOCF 00:06:00.720 #undef SPDK_CONFIG_DAOS 00:06:00.720 #define SPDK_CONFIG_DAOS_DIR 00:06:00.720 #define SPDK_CONFIG_DEBUG 1 00:06:00.720 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:00.720 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:00.720 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:00.720 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:00.720 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:00.720 #undef SPDK_CONFIG_DPDK_UADK 00:06:00.720 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:00.720 #define SPDK_CONFIG_EXAMPLES 1 00:06:00.720 #undef SPDK_CONFIG_FC 00:06:00.720 #define SPDK_CONFIG_FC_PATH 00:06:00.720 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:00.720 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:00.720 #undef SPDK_CONFIG_FUSE 00:06:00.720 #undef SPDK_CONFIG_FUZZER 00:06:00.720 #define SPDK_CONFIG_FUZZER_LIB 00:06:00.720 #undef SPDK_CONFIG_GOLANG 00:06:00.720 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:00.720 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:00.720 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:00.720 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:00.720 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:00.720 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:00.720 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:00.720 #define SPDK_CONFIG_IDXD 1 00:06:00.720 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:00.720 #undef SPDK_CONFIG_IPSEC_MB 00:06:00.720 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:00.720 #define SPDK_CONFIG_ISAL 1 00:06:00.720 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:00.720 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:00.720 #define SPDK_CONFIG_LIBDIR 00:06:00.720 #undef SPDK_CONFIG_LTO 00:06:00.720 #define SPDK_CONFIG_MAX_LCORES 128 00:06:00.720 #define SPDK_CONFIG_NVME_CUSE 1 00:06:00.720 #undef SPDK_CONFIG_OCF 00:06:00.720 #define SPDK_CONFIG_OCF_PATH 00:06:00.720 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:00.720 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:00.720 #define SPDK_CONFIG_PGO_DIR 00:06:00.720 #undef SPDK_CONFIG_PGO_USE 00:06:00.720 #define SPDK_CONFIG_PREFIX /usr/local 00:06:00.720 #undef SPDK_CONFIG_RAID5F 00:06:00.720 #undef SPDK_CONFIG_RBD 00:06:00.720 #define SPDK_CONFIG_RDMA 1 00:06:00.720 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:00.720 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:00.720 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:00.720 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:00.720 #define SPDK_CONFIG_SHARED 1 00:06:00.720 #undef SPDK_CONFIG_SMA 00:06:00.720 #define SPDK_CONFIG_TESTS 1 00:06:00.720 #undef SPDK_CONFIG_TSAN 00:06:00.720 #define SPDK_CONFIG_UBLK 1 00:06:00.720 #define SPDK_CONFIG_UBSAN 1 00:06:00.720 #undef SPDK_CONFIG_UNIT_TESTS 00:06:00.720 #undef SPDK_CONFIG_URING 00:06:00.720 #define SPDK_CONFIG_URING_PATH 00:06:00.720 #undef SPDK_CONFIG_URING_ZNS 00:06:00.720 #undef SPDK_CONFIG_USDT 00:06:00.720 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:00.720 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:00.720 #define SPDK_CONFIG_VFIO_USER 1 00:06:00.720 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:00.720 #define SPDK_CONFIG_VHOST 1 00:06:00.720 #define SPDK_CONFIG_VIRTIO 1 00:06:00.720 #undef SPDK_CONFIG_VTUNE 00:06:00.720 #define SPDK_CONFIG_VTUNE_DIR 00:06:00.720 #define SPDK_CONFIG_WERROR 1 00:06:00.720 #define SPDK_CONFIG_WPDK_DIR 00:06:00.720 #undef SPDK_CONFIG_XNVME 00:06:00.720 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:00.720 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:00.721 15:58:46 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 679359 ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 679359 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.PXQIIt 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PXQIIt/tests/target /tmp/spdk.PXQIIt 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56575066112 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994725376 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5419659264 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993985536 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390187008 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30997012480 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:06:00.721 15:58:46 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=352256 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:00.721 * Looking for test storage... 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56575066112 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7634251776 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:00.721 15:58:46 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.721 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.722 15:58:46 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:00.722 15:58:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:03.271 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:03.271 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.271 15:58:48 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:03.271 Found net devices under 0000:09:00.0: cvl_0_0 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:03.271 Found net devices under 0000:09:00.1: cvl_0_1 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:03.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:06:03.271 00:06:03.271 --- 10.0.0.2 ping statistics --- 00:06:03.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.271 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:06:03.271 00:06:03.271 --- 10.0.0.1 ping statistics --- 00:06:03.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.271 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.271 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.272 ************************************ 00:06:03.272 START TEST nvmf_filesystem_no_in_capsule 00:06:03.272 ************************************ 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=681040 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 681040 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 681040 ']' 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.272 15:58:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.272 [2024-07-15 15:58:48.970903] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:06:03.272 [2024-07-15 15:58:48.971006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.272 [2024-07-15 15:58:49.035806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.272 [2024-07-15 15:58:49.148492] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.272 [2024-07-15 15:58:49.148572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.272 [2024-07-15 15:58:49.148586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.272 [2024-07-15 15:58:49.148597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.272 [2024-07-15 15:58:49.148608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
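[Annotation] The trace above is the nvmf/common.sh TCP bring-up for this run: the target-side interface (cvl_0_0) is moved into a dedicated network namespace, both ends get addresses on 10.0.0.0/24, reachability is checked with ping in each direction, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the equivalent manual steps, using the interface, namespace, and address names taken from this log (the nvmf_tgt path is shortened to be relative to the spdk checkout; the real run uses the full Jenkins workspace path):

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  modprobe nvme-tcp                                     # host-side NVMe/TCP driver
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &      # target app: 4 cores, all tracepoint groups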
00:06:03.272 [2024-07-15 15:58:49.148687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.272 [2024-07-15 15:58:49.148755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.272 [2024-07-15 15:58:49.148819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.272 [2024-07-15 15:58:49.148822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 [2024-07-15 15:58:49.308773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 [2024-07-15 15:58:49.495173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.529 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:03.529 { 00:06:03.529 "name": "Malloc1", 00:06:03.529 "aliases": [ 00:06:03.529 "779aa2c3-a3db-425d-8eeb-4eef44ccec96" 00:06:03.529 ], 00:06:03.529 "product_name": "Malloc disk", 00:06:03.529 "block_size": 512, 00:06:03.529 "num_blocks": 1048576, 00:06:03.529 "uuid": "779aa2c3-a3db-425d-8eeb-4eef44ccec96", 00:06:03.529 "assigned_rate_limits": { 00:06:03.529 "rw_ios_per_sec": 0, 00:06:03.529 "rw_mbytes_per_sec": 0, 00:06:03.529 "r_mbytes_per_sec": 0, 00:06:03.529 "w_mbytes_per_sec": 0 00:06:03.529 }, 00:06:03.529 "claimed": true, 00:06:03.529 "claim_type": "exclusive_write", 00:06:03.529 "zoned": false, 00:06:03.529 "supported_io_types": { 00:06:03.529 "read": true, 00:06:03.529 "write": true, 00:06:03.529 "unmap": true, 00:06:03.529 "flush": true, 00:06:03.529 "reset": true, 00:06:03.529 "nvme_admin": false, 00:06:03.529 "nvme_io": false, 00:06:03.529 "nvme_io_md": false, 00:06:03.529 "write_zeroes": true, 00:06:03.529 "zcopy": true, 00:06:03.529 "get_zone_info": false, 00:06:03.529 "zone_management": false, 00:06:03.529 "zone_append": false, 00:06:03.529 "compare": false, 00:06:03.529 "compare_and_write": false, 00:06:03.529 "abort": true, 00:06:03.529 "seek_hole": false, 00:06:03.529 "seek_data": false, 00:06:03.529 "copy": true, 00:06:03.529 "nvme_iov_md": false 00:06:03.529 }, 00:06:03.529 "memory_domains": [ 00:06:03.529 { 
00:06:03.530 "dma_device_id": "system", 00:06:03.530 "dma_device_type": 1 00:06:03.530 }, 00:06:03.530 { 00:06:03.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.530 "dma_device_type": 2 00:06:03.530 } 00:06:03.530 ], 00:06:03.530 "driver_specific": {} 00:06:03.530 } 00:06:03.530 ]' 00:06:03.530 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:03.787 15:58:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:04.354 15:58:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:04.354 15:58:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:04.354 15:58:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:04.354 15:58:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:04.354 15:58:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:06.259 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:06.517 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:07.082 15:58:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.020 ************************************ 00:06:08.020 START TEST filesystem_ext4 00:06:08.020 ************************************ 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:08.020 15:58:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:08.020 15:58:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:08.020 mke2fs 1.46.5 (30-Dec-2021) 00:06:08.020 Discarding device blocks: 0/522240 done 00:06:08.020 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:08.020 Filesystem UUID: 55a57c56-b420-4feb-b4e5-a1f442df4807 00:06:08.020 Superblock backups stored on blocks: 00:06:08.020 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:08.020 00:06:08.020 Allocating group tables: 0/64 done 00:06:08.020 Writing inode tables: 0/64 done 00:06:08.958 Creating journal (8192 blocks): done 00:06:09.784 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:09.784 00:06:09.784 15:58:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:09.784 15:58:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 681040 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.721 00:06:10.721 real 0m2.789s 00:06:10.721 user 0m0.020s 00:06:10.721 sys 0m0.049s 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 ************************************ 00:06:10.721 END TEST filesystem_ext4 00:06:10.721 ************************************ 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.721 15:58:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 ************************************ 00:06:10.721 START TEST filesystem_btrfs 00:06:10.721 ************************************ 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:10.721 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:10.979 btrfs-progs v6.6.2 00:06:10.979 See https://btrfs.readthedocs.io for more information. 00:06:10.979 00:06:10.979 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:10.979 NOTE: several default settings have changed in version 5.15, please make sure 00:06:10.979 this does not affect your deployments: 00:06:10.979 - DUP for metadata (-m dup) 00:06:10.979 - enabled no-holes (-O no-holes) 00:06:10.979 - enabled free-space-tree (-R free-space-tree) 00:06:10.979 00:06:10.979 Label: (null) 00:06:10.979 UUID: 39523dec-5c7a-417a-9f7e-07308c56e445 00:06:10.979 Node size: 16384 00:06:10.979 Sector size: 4096 00:06:10.979 Filesystem size: 510.00MiB 00:06:10.979 Block group profiles: 00:06:10.979 Data: single 8.00MiB 00:06:10.979 Metadata: DUP 32.00MiB 00:06:10.979 System: DUP 8.00MiB 00:06:10.979 SSD detected: yes 00:06:10.979 Zoned device: no 00:06:10.979 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:10.979 Runtime features: free-space-tree 00:06:10.979 Checksum: crc32c 00:06:10.979 Number of devices: 1 00:06:10.979 Devices: 00:06:10.979 ID SIZE PATH 00:06:10.979 1 510.00MiB /dev/nvme0n1p1 00:06:10.979 00:06:10.979 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:10.979 15:58:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 681040 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.238 00:06:11.238 real 0m0.528s 00:06:11.238 user 0m0.019s 00:06:11.238 sys 0m0.114s 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:11.238 ************************************ 00:06:11.238 END TEST filesystem_btrfs 00:06:11.238 ************************************ 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.238 ************************************ 00:06:11.238 START TEST filesystem_xfs 00:06:11.238 ************************************ 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:11.238 15:58:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:11.495 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:11.495 = sectsz=512 attr=2, projid32bit=1 00:06:11.495 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:11.496 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:11.496 data = bsize=4096 blocks=130560, imaxpct=25 00:06:11.496 = sunit=0 swidth=0 blks 00:06:11.496 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:11.496 log =internal log bsize=4096 blocks=16384, version=2 00:06:11.496 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:11.496 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:12.427 Discarding blocks...Done. 
00:06:12.427 15:58:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:12.427 15:58:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 681040 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.328 15:58:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.328 00:06:14.328 real 0m2.783s 00:06:14.328 user 0m0.011s 00:06:14.328 sys 0m0.067s 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 ************************************ 00:06:14.328 END TEST filesystem_xfs 00:06:14.328 ************************************ 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:14.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.328 15:59:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 681040 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 681040 ']' 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 681040 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 681040 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 681040' 00:06:14.328 killing process with pid 681040 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 681040 00:06:14.328 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 681040 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:14.899 00:06:14.899 real 0m11.718s 00:06:14.899 user 0m44.772s 00:06:14.899 sys 0m1.788s 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 ************************************ 00:06:14.899 END TEST nvmf_filesystem_no_in_capsule 00:06:14.899 ************************************ 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 ************************************ 00:06:14.899 START TEST nvmf_filesystem_in_capsule 00:06:14.899 ************************************ 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=682660 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 682660 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 682660 ']' 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.899 15:59:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 [2024-07-15 15:59:00.748301] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:06:14.899 [2024-07-15 15:59:00.748377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.899 [2024-07-15 15:59:00.820041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.159 [2024-07-15 15:59:00.931459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.159 [2024-07-15 15:59:00.931539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:15.159 [2024-07-15 15:59:00.931567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.159 [2024-07-15 15:59:00.931578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.159 [2024-07-15 15:59:00.931588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.159 [2024-07-15 15:59:00.932013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.159 [2024-07-15 15:59:00.932071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.159 [2024-07-15 15:59:00.932046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.159 [2024-07-15 15:59:00.932074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.159 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.159 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:15.159 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:15.159 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.160 [2024-07-15 15:59:01.080816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.160 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 Malloc1 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.429 15:59:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 [2024-07-15 15:59:01.268063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:15.429 { 00:06:15.429 "name": "Malloc1", 00:06:15.429 "aliases": [ 00:06:15.429 "6c786d82-3b23-422c-965c-076f85ded36c" 00:06:15.429 ], 00:06:15.429 "product_name": "Malloc disk", 00:06:15.429 "block_size": 512, 00:06:15.429 "num_blocks": 1048576, 00:06:15.429 "uuid": "6c786d82-3b23-422c-965c-076f85ded36c", 00:06:15.429 "assigned_rate_limits": { 00:06:15.429 "rw_ios_per_sec": 0, 00:06:15.429 "rw_mbytes_per_sec": 0, 00:06:15.429 "r_mbytes_per_sec": 0, 00:06:15.429 "w_mbytes_per_sec": 0 00:06:15.429 }, 00:06:15.429 "claimed": true, 00:06:15.429 "claim_type": "exclusive_write", 00:06:15.429 "zoned": false, 00:06:15.429 "supported_io_types": { 00:06:15.429 "read": true, 00:06:15.429 "write": true, 00:06:15.429 "unmap": true, 00:06:15.429 "flush": true, 00:06:15.429 "reset": true, 00:06:15.429 "nvme_admin": false, 00:06:15.429 "nvme_io": false, 00:06:15.429 "nvme_io_md": false, 00:06:15.429 "write_zeroes": true, 00:06:15.429 "zcopy": true, 00:06:15.429 "get_zone_info": false, 00:06:15.429 "zone_management": false, 00:06:15.429 
"zone_append": false, 00:06:15.429 "compare": false, 00:06:15.429 "compare_and_write": false, 00:06:15.429 "abort": true, 00:06:15.429 "seek_hole": false, 00:06:15.429 "seek_data": false, 00:06:15.429 "copy": true, 00:06:15.429 "nvme_iov_md": false 00:06:15.429 }, 00:06:15.429 "memory_domains": [ 00:06:15.429 { 00:06:15.429 "dma_device_id": "system", 00:06:15.429 "dma_device_type": 1 00:06:15.429 }, 00:06:15.429 { 00:06:15.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.429 "dma_device_type": 2 00:06:15.429 } 00:06:15.429 ], 00:06:15.429 "driver_specific": {} 00:06:15.429 } 00:06:15.429 ]' 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:15.429 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:15.430 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:15.430 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:15.430 15:59:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:16.376 15:59:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:16.376 15:59:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:16.376 15:59:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:16.376 15:59:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:16.376 15:59:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:18.277 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:18.277 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:18.277 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:18.277 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:18.277 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:18.278 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:18.535 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:19.101 15:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.037 ************************************ 00:06:20.037 START TEST filesystem_in_capsule_ext4 00:06:20.037 ************************************ 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:20.037 15:59:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:20.037 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:20.037 mke2fs 1.46.5 (30-Dec-2021) 00:06:20.037 Discarding device blocks: 0/522240 done 00:06:20.037 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:20.037 Filesystem UUID: f80e3d15-2119-4394-a802-61d09812d87e 00:06:20.037 Superblock backups stored on blocks: 00:06:20.037 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:20.037 00:06:20.037 Allocating group tables: 0/64 done 00:06:20.037 Writing inode tables: 0/64 done 00:06:20.295 Creating journal (8192 blocks): done 00:06:20.295 Writing superblocks and filesystem accounting information: 0/64 done 00:06:20.295 00:06:20.295 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:20.295 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:20.295 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 682660 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:20.553 00:06:20.553 real 0m0.555s 00:06:20.553 user 0m0.020s 00:06:20.553 sys 0m0.051s 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 ************************************ 00:06:20.553 END TEST filesystem_in_capsule_ext4 00:06:20.553 ************************************ 00:06:20.553 
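[Annotation] Each filesystem_* subtest drives the same cycle against the exported namespace: the nvme0n1 device is partitioned once up front, then each filesystem is built on the partition, mounted, exercised with a file create/remove plus sync, unmounted, and the test checks that the target process and the partition are still present. A rough per-filesystem sketch of the steps visible in target/filesystem.sh above; $fstype, $force, and $nvmfpid are illustrative variable names (in this run fstype is ext4, btrfs, or xfs and the target pid is 682660):

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # done once per run
  partprobe
  mkfs.$fstype $force /dev/nvme0n1p1      # force flag is -F for ext4, -f for btrfs/xfs
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                      # target must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1 # partition still visible after umount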
15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 ************************************ 00:06:20.553 START TEST filesystem_in_capsule_btrfs 00:06:20.553 ************************************ 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:20.553 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:20.554 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:20.554 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:20.554 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:20.554 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:20.554 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:20.811 btrfs-progs v6.6.2 00:06:20.811 See https://btrfs.readthedocs.io for more information. 00:06:20.811 00:06:20.811 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:20.811 NOTE: several default settings have changed in version 5.15, please make sure 00:06:20.811 this does not affect your deployments: 00:06:20.811 - DUP for metadata (-m dup) 00:06:20.811 - enabled no-holes (-O no-holes) 00:06:20.811 - enabled free-space-tree (-R free-space-tree) 00:06:20.811 00:06:20.811 Label: (null) 00:06:20.811 UUID: b0613882-b873-4983-815c-987108b8fda3 00:06:20.811 Node size: 16384 00:06:20.811 Sector size: 4096 00:06:20.811 Filesystem size: 510.00MiB 00:06:20.811 Block group profiles: 00:06:20.811 Data: single 8.00MiB 00:06:20.811 Metadata: DUP 32.00MiB 00:06:20.811 System: DUP 8.00MiB 00:06:20.811 SSD detected: yes 00:06:20.811 Zoned device: no 00:06:20.811 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:20.811 Runtime features: free-space-tree 00:06:20.811 Checksum: crc32c 00:06:20.811 Number of devices: 1 00:06:20.811 Devices: 00:06:20.811 ID SIZE PATH 00:06:20.811 1 510.00MiB /dev/nvme0n1p1 00:06:20.811 00:06:20.811 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:20.811 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 682660 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.380 00:06:21.380 real 0m0.856s 00:06:21.380 user 0m0.014s 00:06:21.380 sys 0m0.117s 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:21.380 ************************************ 00:06:21.380 END TEST filesystem_in_capsule_btrfs 00:06:21.380 ************************************ 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.380 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.380 ************************************ 00:06:21.380 START TEST filesystem_in_capsule_xfs 00:06:21.380 ************************************ 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:21.381 15:59:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:21.639 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:21.639 = sectsz=512 attr=2, projid32bit=1 00:06:21.639 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:21.639 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:21.639 data = bsize=4096 blocks=130560, imaxpct=25 00:06:21.639 = sunit=0 swidth=0 blks 00:06:21.639 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:21.639 log =internal log bsize=4096 blocks=16384, version=2 00:06:21.639 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:21.639 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:22.576 Discarding blocks...Done. 
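The ext4, btrfs and xfs sub-tests traced above and below all drive the same sequence; only the mkfs invocation differs (make_filesystem passes -F for ext4 and -f otherwise). A minimal stand-alone sketch of that flow, reusing the device, mount point and target PID seen in this run (illustrative only, not the actual target/filesystem.sh helper):

  #!/usr/bin/env bash
  # Illustrative sketch of the per-filesystem steps traced in this section.
  fstype=$1                     # ext4 | btrfs | xfs
  dev=/dev/nvme0n1p1            # partition created earlier with: parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  mnt=/mnt/device
  nvmfpid=682660                # nvmf_tgt PID for this run

  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
  mkfs."$fstype" "$force" "$dev"

  mount "$dev" "$mnt"           # do a little real I/O through the exported namespace
  touch "$mnt/aaa"
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"

  kill -0 "$nvmfpid"                        # target process must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace and partition still visible to the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1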
00:06:22.576 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:22.576 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 682660 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.479 00:06:24.479 real 0m3.121s 00:06:24.479 user 0m0.012s 00:06:24.479 sys 0m0.061s 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:24.479 ************************************ 00:06:24.479 END TEST filesystem_in_capsule_xfs 00:06:24.479 ************************************ 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:24.479 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:24.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:24.739 15:59:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 682660 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 682660 ']' 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 682660 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 682660 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 682660' 00:06:24.739 killing process with pid 682660 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 682660 00:06:24.739 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 682660 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:25.309 00:06:25.309 real 0m10.428s 00:06:25.309 user 0m39.736s 00:06:25.309 sys 0m1.636s 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.309 ************************************ 00:06:25.309 END TEST nvmf_filesystem_in_capsule 00:06:25.309 ************************************ 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:25.309 rmmod nvme_tcp 00:06:25.309 rmmod nvme_fabrics 00:06:25.309 rmmod nvme_keyring 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.309 15:59:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.845 15:59:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.845 00:06:27.845 real 0m26.745s 00:06:27.845 user 1m25.466s 00:06:27.845 sys 0m5.067s 00:06:27.845 15:59:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.845 15:59:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.845 ************************************ 00:06:27.845 END TEST nvmf_filesystem 00:06:27.845 ************************************ 00:06:27.845 15:59:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:27.845 15:59:13 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:27.845 15:59:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.845 15:59:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.845 15:59:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.845 ************************************ 00:06:27.845 START TEST nvmf_target_discovery 00:06:27.845 ************************************ 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:27.845 * Looking for test storage... 
00:06:27.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:27.845 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.846 15:59:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:29.748 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.748 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:29.748 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.749 15:59:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:29.749 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:29.749 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:29.749 Found net devices under 0000:09:00.0: cvl_0_0 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:29.749 Found net devices under 0000:09:00.1: cvl_0_1 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:29.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:06:29.749 00:06:29.749 --- 10.0.0.2 ping statistics --- 00:06:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.749 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:06:29.749 00:06:29.749 --- 10.0.0.1 ping statistics --- 00:06:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.749 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=686006 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 686006 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 686006 ']' 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:29.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.749 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.750 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:29.750 [2024-07-15 15:59:15.654523] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:06:29.750 [2024-07-15 15:59:15.654617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.750 [2024-07-15 15:59:15.722013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.007 [2024-07-15 15:59:15.830414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.007 [2024-07-15 15:59:15.830478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.007 [2024-07-15 15:59:15.830507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.007 [2024-07-15 15:59:15.830518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.007 [2024-07-15 15:59:15.830528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.007 [2024-07-15 15:59:15.830580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.007 [2024-07-15 15:59:15.830608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.007 [2024-07-15 15:59:15.830666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.007 [2024-07-15 15:59:15.830668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.007 [2024-07-15 15:59:15.990839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.007 15:59:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.008 15:59:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:30.008 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.008 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
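The discovery test provisions four identical null-backed subsystems; the rpc_cmd calls that start here and continue below are the autotest wrapper around scripts/rpc.py. Condensed into a plain sketch (assuming the default RPC socket), the setup traced in this section is roughly:

  # Sketch of the provisioning loop traced below (rpc_cmd ~= scripts/rpc.py in autotest).
  # Size and block-size values are taken verbatim from the trace.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $rpc bdev_null_create "Null$i" 102400 512
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%014d' "$i")"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430    # shows up as Discovery Log Entry 5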
00:06:30.008 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.008 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 Null1 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 [2024-07-15 15:59:16.031201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 Null2 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:30.268 15:59:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 Null3 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 Null4 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.268 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:06:30.528 00:06:30.528 Discovery Log Number of Records 6, Generation counter 6 00:06:30.528 =====Discovery Log Entry 0====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: current discovery subsystem 00:06:30.528 treq: not required 00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4420 00:06:30.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: explicit discovery connections, duplicate discovery information 00:06:30.528 sectype: none 00:06:30.528 =====Discovery Log Entry 1====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: nvme subsystem 00:06:30.528 treq: not required 00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4420 00:06:30.528 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: none 00:06:30.528 sectype: none 00:06:30.528 =====Discovery Log Entry 2====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: nvme subsystem 00:06:30.528 treq: not required 00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4420 00:06:30.528 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: none 00:06:30.528 sectype: none 00:06:30.528 =====Discovery Log Entry 3====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: nvme subsystem 00:06:30.528 treq: not required 00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4420 00:06:30.528 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: none 00:06:30.528 sectype: none 00:06:30.528 =====Discovery Log Entry 4====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: nvme subsystem 00:06:30.528 treq: not required 
00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4420 00:06:30.528 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: none 00:06:30.528 sectype: none 00:06:30.528 =====Discovery Log Entry 5====== 00:06:30.528 trtype: tcp 00:06:30.528 adrfam: ipv4 00:06:30.528 subtype: discovery subsystem referral 00:06:30.528 treq: not required 00:06:30.528 portid: 0 00:06:30.528 trsvcid: 4430 00:06:30.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.528 traddr: 10.0.0.2 00:06:30.528 eflags: none 00:06:30.528 sectype: none 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:30.528 Perform nvmf subsystem discovery via RPC 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.528 [ 00:06:30.528 { 00:06:30.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:30.528 "subtype": "Discovery", 00:06:30.528 "listen_addresses": [ 00:06:30.528 { 00:06:30.528 "trtype": "TCP", 00:06:30.528 "adrfam": "IPv4", 00:06:30.528 "traddr": "10.0.0.2", 00:06:30.528 "trsvcid": "4420" 00:06:30.528 } 00:06:30.528 ], 00:06:30.528 "allow_any_host": true, 00:06:30.528 "hosts": [] 00:06:30.528 }, 00:06:30.528 { 00:06:30.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:30.528 "subtype": "NVMe", 00:06:30.528 "listen_addresses": [ 00:06:30.528 { 00:06:30.528 "trtype": "TCP", 00:06:30.528 "adrfam": "IPv4", 00:06:30.528 "traddr": "10.0.0.2", 00:06:30.528 "trsvcid": "4420" 00:06:30.528 } 00:06:30.528 ], 00:06:30.528 "allow_any_host": true, 00:06:30.528 "hosts": [], 00:06:30.528 "serial_number": "SPDK00000000000001", 00:06:30.528 "model_number": "SPDK bdev Controller", 00:06:30.528 "max_namespaces": 32, 00:06:30.528 "min_cntlid": 1, 00:06:30.528 "max_cntlid": 65519, 00:06:30.528 "namespaces": [ 00:06:30.528 { 00:06:30.528 "nsid": 1, 00:06:30.528 "bdev_name": "Null1", 00:06:30.528 "name": "Null1", 00:06:30.528 "nguid": "A338FE0F085B41749AACD37E488AD14A", 00:06:30.528 "uuid": "a338fe0f-085b-4174-9aac-d37e488ad14a" 00:06:30.528 } 00:06:30.528 ] 00:06:30.528 }, 00:06:30.528 { 00:06:30.528 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:30.528 "subtype": "NVMe", 00:06:30.528 "listen_addresses": [ 00:06:30.528 { 00:06:30.528 "trtype": "TCP", 00:06:30.528 "adrfam": "IPv4", 00:06:30.528 "traddr": "10.0.0.2", 00:06:30.528 "trsvcid": "4420" 00:06:30.528 } 00:06:30.528 ], 00:06:30.528 "allow_any_host": true, 00:06:30.528 "hosts": [], 00:06:30.528 "serial_number": "SPDK00000000000002", 00:06:30.528 "model_number": "SPDK bdev Controller", 00:06:30.528 "max_namespaces": 32, 00:06:30.528 "min_cntlid": 1, 00:06:30.528 "max_cntlid": 65519, 00:06:30.528 "namespaces": [ 00:06:30.528 { 00:06:30.528 "nsid": 1, 00:06:30.528 "bdev_name": "Null2", 00:06:30.528 "name": "Null2", 00:06:30.528 "nguid": "D309F7589C6E4A4B82E396F41932BD11", 00:06:30.528 "uuid": "d309f758-9c6e-4a4b-82e3-96f41932bd11" 00:06:30.528 } 00:06:30.528 ] 00:06:30.528 }, 00:06:30.528 { 00:06:30.528 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:30.528 "subtype": "NVMe", 00:06:30.528 "listen_addresses": [ 00:06:30.528 { 00:06:30.528 "trtype": "TCP", 00:06:30.528 "adrfam": "IPv4", 00:06:30.528 "traddr": "10.0.0.2", 00:06:30.528 "trsvcid": "4420" 00:06:30.528 } 00:06:30.528 ], 00:06:30.528 "allow_any_host": true, 
00:06:30.528 "hosts": [], 00:06:30.528 "serial_number": "SPDK00000000000003", 00:06:30.528 "model_number": "SPDK bdev Controller", 00:06:30.528 "max_namespaces": 32, 00:06:30.528 "min_cntlid": 1, 00:06:30.528 "max_cntlid": 65519, 00:06:30.528 "namespaces": [ 00:06:30.528 { 00:06:30.528 "nsid": 1, 00:06:30.528 "bdev_name": "Null3", 00:06:30.528 "name": "Null3", 00:06:30.528 "nguid": "90257AC07AB84AB2AC862B16B9986A50", 00:06:30.528 "uuid": "90257ac0-7ab8-4ab2-ac86-2b16b9986a50" 00:06:30.528 } 00:06:30.528 ] 00:06:30.528 }, 00:06:30.528 { 00:06:30.528 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:30.528 "subtype": "NVMe", 00:06:30.528 "listen_addresses": [ 00:06:30.528 { 00:06:30.528 "trtype": "TCP", 00:06:30.528 "adrfam": "IPv4", 00:06:30.528 "traddr": "10.0.0.2", 00:06:30.528 "trsvcid": "4420" 00:06:30.528 } 00:06:30.528 ], 00:06:30.528 "allow_any_host": true, 00:06:30.528 "hosts": [], 00:06:30.528 "serial_number": "SPDK00000000000004", 00:06:30.528 "model_number": "SPDK bdev Controller", 00:06:30.528 "max_namespaces": 32, 00:06:30.528 "min_cntlid": 1, 00:06:30.528 "max_cntlid": 65519, 00:06:30.528 "namespaces": [ 00:06:30.528 { 00:06:30.528 "nsid": 1, 00:06:30.528 "bdev_name": "Null4", 00:06:30.528 "name": "Null4", 00:06:30.528 "nguid": "61856B40F0904B40BB304448E1A3F10D", 00:06:30.528 "uuid": "61856b40-f090-4b40-bb30-4448e1a3f10d" 00:06:30.528 } 00:06:30.528 ] 00:06:30.528 } 00:06:30.528 ] 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.528 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:30.529 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:30.529 rmmod nvme_tcp 00:06:30.529 rmmod nvme_fabrics 00:06:30.789 rmmod nvme_keyring 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 686006 ']' 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 686006 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 686006 ']' 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 686006 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 686006 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 686006' 00:06:30.789 killing process with pid 686006 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 686006 00:06:30.789 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 686006 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.049 15:59:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.954 15:59:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:32.954 00:06:32.954 real 0m5.601s 00:06:32.954 user 0m4.826s 00:06:32.954 sys 0m1.844s 00:06:32.954 15:59:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.954 15:59:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.954 ************************************ 00:06:32.954 END TEST nvmf_target_discovery 00:06:32.954 ************************************ 00:06:32.954 15:59:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:06:32.954 15:59:18 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:32.954 15:59:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:32.954 15:59:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.954 15:59:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.954 ************************************ 00:06:32.954 START TEST nvmf_referrals 00:06:32.954 ************************************ 00:06:32.954 15:59:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:33.212 * Looking for test storage... 00:06:33.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
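The three loopback addresses defined here, together with NVMF_PORT_REFERRAL=4430 set immediately below, are the discovery referral endpoints this test registers and later removes. Every check in the trace that follows reduces to the same handful of calls, sketched here with SPDK's scripts/rpc.py and the default RPC socket (an assumption; the harness's rpc_cmd wrapper drives the same RPCs):

  # register a referral to another discovery service and read it back
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # the referral should also show up in the log page an initiator fetches from port 8009
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json
  # and should be gone again after removal
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430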
00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.212 15:59:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.788 15:59:21 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:35.788 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:35.788 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.788 15:59:21 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:35.788 Found net devices under 0000:09:00.0: cvl_0_0 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:35.788 Found net devices under 0000:09:00.1: cvl_0_1 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.788 15:59:21 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.788 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:35.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:06:35.789 00:06:35.789 --- 10.0.0.2 ping statistics --- 00:06:35.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.789 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:06:35.789 00:06:35.789 --- 10.0.0.1 ping statistics --- 00:06:35.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.789 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=688092 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 688092 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 688092 ']' 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
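At this point the harness has moved one port of the E810 NIC (cvl_0_0) into a private network namespace, left its peer (cvl_0_1) in the default namespace, and verified connectivity in both directions with ping. Condensed from the trace above, the topology setup amounts to the following (the cvl_* names are aliases this rig assigns; other systems will use different interface names):

  ip netns add cvl_0_0_ns_spdk                       # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so it serves 10.0.0.2 while the nvme discover commands later in the trace run from the host side against it.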
00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 [2024-07-15 15:59:21.403712] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:06:35.789 [2024-07-15 15:59:21.403791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.789 [2024-07-15 15:59:21.466572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.789 [2024-07-15 15:59:21.569220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.789 [2024-07-15 15:59:21.569282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.789 [2024-07-15 15:59:21.569296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.789 [2024-07-15 15:59:21.569306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.789 [2024-07-15 15:59:21.569315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.789 [2024-07-15 15:59:21.569396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.789 [2024-07-15 15:59:21.569470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.789 [2024-07-15 15:59:21.569530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.789 [2024-07-15 15:59:21.569533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 [2024-07-15 15:59:21.718599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 
[2024-07-15 15:59:21.730827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.050 15:59:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.050 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:36.308 15:59:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.308 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.566 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.824 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:37.083 15:59:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.083 15:59:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.083 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:06:37.342 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:37.602 rmmod nvme_tcp 00:06:37.602 rmmod nvme_fabrics 00:06:37.602 rmmod nvme_keyring 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 688092 ']' 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 688092 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 688092 ']' 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 688092 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 688092 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 688092' 00:06:37.602 killing process with pid 688092 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 688092 00:06:37.602 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 688092 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.862 15:59:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.772 15:59:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:39.772 00:06:39.772 
real 0m6.807s 00:06:39.772 user 0m9.850s 00:06:39.772 sys 0m2.216s 00:06:39.772 15:59:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.772 15:59:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.772 ************************************ 00:06:39.772 END TEST nvmf_referrals 00:06:39.772 ************************************ 00:06:40.030 15:59:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:40.030 15:59:25 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.030 15:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:40.030 15:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.030 15:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.030 ************************************ 00:06:40.030 START TEST nvmf_connect_disconnect 00:06:40.030 ************************************ 00:06:40.030 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.030 * Looking for test storage... 00:06:40.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.030 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:40.031 15:59:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:42.562 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:42.562 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:42.562 Found net devices under 0000:09:00.0: cvl_0_0 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:42.562 Found net devices under 0000:09:00.1: cvl_0_1 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
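The discovery pass above reduces to a sysfs walk: each supported PCI function is resolved to its kernel net device name through /sys/bus/pci/devices/<bdf>/net/, and the devices found become the target and initiator interfaces. A minimal standalone sketch of that lookup, using the two PCI addresses and device names seen in this run (everything else here is illustrative, not the harness's exact code):

    # Resolve each detected PCI function to its net device via sysfs,
    # then assign target/initiator roles the way nvmf/common.sh does.
    net_devs=()
    for pci in 0000:09:00.0 0000:09:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue
            net_devs+=("${path##*/}")
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run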
00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.562 15:59:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:06:42.562 00:06:42.562 --- 10.0.0.2 ping statistics --- 00:06:42.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.562 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
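Taken together, the nvmf_tcp_init records in this block wire the two E810 ports back-to-back: the target port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, the firewall is opened for the NVMe/TCP port, and reachability is verified with a ping in each direction. Condensed into a sketch (interface names, namespace, and addresses are the ones from this run; needs root):

    TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INIT_IF"                       # initiator side, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side, inside the namespace
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # host -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                       # target namespace -> host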
00:06:42.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:06:42.562 00:06:42.562 --- 10.0.0.1 ping statistics --- 00:06:42.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.562 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=690391 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 690391 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 690391 ']' 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.562 [2024-07-15 15:59:28.166324] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
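Once nvmf_tgt is running inside the namespace, the records that follow use the harness's rpc_cmd wrapper (SPDK's rpc.py over /var/tmp/spdk.sock) to stand up the target, and the test then connects and disconnects an initiator five times. Condensed into a sketch; the rpc_cmd arguments are the ones visible below, while the nvme connect/disconnect pair is only an approximation of what connect_disconnect.sh drives (the harness also passes the --hostnqn/--hostid values from NVME_HOST), not a copy of it:

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                      # returns Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: each disconnect prints the
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen in the log.
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done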
00:06:42.562 [2024-07-15 15:59:28.166415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.562 [2024-07-15 15:59:28.231710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.562 [2024-07-15 15:59:28.342842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.562 [2024-07-15 15:59:28.342903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.562 [2024-07-15 15:59:28.342916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.562 [2024-07-15 15:59:28.342926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.562 [2024-07-15 15:59:28.342951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.562 [2024-07-15 15:59:28.343048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.562 [2024-07-15 15:59:28.343113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.562 [2024-07-15 15:59:28.343179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.562 [2024-07-15 15:59:28.343181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.562 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 [2024-07-15 15:59:28.502821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:42.563 15:59:28 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.563 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 [2024-07-15 15:59:28.559889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.821 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.821 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:42.821 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:42.821 15:59:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:45.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:51.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.022 rmmod nvme_tcp 00:06:57.022 rmmod nvme_fabrics 00:06:57.022 rmmod nvme_keyring 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 690391 ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 690391 ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 690391' 00:06:57.022 killing process with pid 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 690391 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.022 15:59:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.960 15:59:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.960 00:06:58.960 real 0m19.010s 00:06:58.960 user 0m57.028s 00:06:58.960 sys 0m3.421s 00:06:58.960 15:59:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.960 15:59:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:58.960 ************************************ 00:06:58.960 END TEST nvmf_connect_disconnect 00:06:58.960 ************************************ 00:06:58.960 15:59:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.960 15:59:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:58.960 15:59:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.960 15:59:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.960 15:59:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.961 ************************************ 00:06:58.961 START TEST nvmf_multitarget 00:06:58.961 ************************************ 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:58.961 * Looking for test storage... 
00:06:58.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.961 15:59:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:01.501 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:01.501 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:01.501 Found net devices under 0000:09:00.0: cvl_0_0 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:01.501 Found net devices under 0000:09:00.1: cvl_0_1 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.501 15:59:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:07:01.501 00:07:01.501 --- 10.0.0.2 ping statistics --- 00:07:01.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.501 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:07:01.501 00:07:01.501 --- 10.0.0.1 ping statistics --- 00:07:01.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.501 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:01.501 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=694092 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 694092 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 694092 ']' 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.502 [2024-07-15 15:59:47.203444] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
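The multitarget case that follows exercises SPDK's multi-target RPCs through test/nvmf/target/multitarget_rpc.py: count the existing targets, create two named ones, delete them again, and check the count each time with jq. The checks below reduce to this sketch (rpc_py stands for the multitarget_rpc.py path used in this run):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target so far
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32         # prints the new target's name
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1               # prints "true" on success
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]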
00:07:01.502 [2024-07-15 15:59:47.203530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.502 [2024-07-15 15:59:47.269128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.502 [2024-07-15 15:59:47.373646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.502 [2024-07-15 15:59:47.373700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.502 [2024-07-15 15:59:47.373728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.502 [2024-07-15 15:59:47.373738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.502 [2024-07-15 15:59:47.373747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.502 [2024-07-15 15:59:47.373823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.502 [2024-07-15 15:59:47.373904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.502 [2024-07-15 15:59:47.374021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.502 [2024-07-15 15:59:47.374025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.502 15:59:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:01.760 "nvmf_tgt_1" 00:07:01.760 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:02.018 "nvmf_tgt_2" 00:07:02.018 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.018 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:02.018 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:02.018 15:59:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:02.275 true 00:07:02.276 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:02.276 true 00:07:02.276 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.276 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.535 rmmod nvme_tcp 00:07:02.535 rmmod nvme_fabrics 00:07:02.535 rmmod nvme_keyring 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 694092 ']' 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 694092 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 694092 ']' 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 694092 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 694092 00:07:02.535 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.536 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.536 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 694092' 00:07:02.536 killing process with pid 694092 00:07:02.536 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 694092 00:07:02.536 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 694092 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.794 15:59:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.701 15:59:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.701 00:07:04.701 real 0m5.828s 00:07:04.701 user 0m6.469s 00:07:04.701 sys 0m1.947s 00:07:04.701 15:59:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.701 15:59:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.701 ************************************ 00:07:04.701 END TEST nvmf_multitarget 00:07:04.701 ************************************ 00:07:04.961 15:59:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:04.961 15:59:50 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:04.961 15:59:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.961 15:59:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.961 15:59:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.961 ************************************ 00:07:04.961 START TEST nvmf_rpc 00:07:04.961 ************************************ 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:04.961 * Looking for test storage... 
00:07:04.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:04.961 15:59:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
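Editor's note: the nvme connect invocations that appear later in this trace are assembled from the variables nvmf/common.sh exports above (NVMF_PORT, NVME_HOSTNQN, NVME_HOSTID, and the target IP picked during nvmftestinit). A minimal sketch of that assembly, assuming the same defaults shown in this run (port 4420, target 10.0.0.2, nvme-cli installed) and not the harness's exact code:
#!/usr/bin/env bash
# Sketch only: mirrors the host-identity variables nvmf/common.sh sets in this run.
NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=10.0.0.2
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
NVME_HOSTID=${NVME_HOSTNQN##*:}        # host ID is the UUID portion of the host NQN (illustrative derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Same shape as the connect calls that appear later in the trace (requires a listener at 10.0.0.2:4420).
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
     -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"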
00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:07.493 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:07.493 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:07.493 Found net devices under 0000:09:00.0: cvl_0_0 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:07.493 Found net devices under 0000:09:00.1: cvl_0_1 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.493 15:59:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:07:07.493 00:07:07.493 --- 10.0.0.2 ping statistics --- 00:07:07.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.493 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:07:07.493 00:07:07.493 --- 10.0.0.1 ping statistics --- 00:07:07.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.493 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.493 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=696288 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 696288 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 696288 ']' 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 [2024-07-15 15:59:53.126948] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:07.494 [2024-07-15 15:59:53.127060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.494 [2024-07-15 15:59:53.194075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.494 [2024-07-15 15:59:53.310072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.494 [2024-07-15 15:59:53.310122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:07.494 [2024-07-15 15:59:53.310138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.494 [2024-07-15 15:59:53.310150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.494 [2024-07-15 15:59:53.310161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.494 [2024-07-15 15:59:53.310216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.494 [2024-07-15 15:59:53.310283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.494 [2024-07-15 15:59:53.310364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.494 [2024-07-15 15:59:53.310366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:07.494 "tick_rate": 2700000000, 00:07:07.494 "poll_groups": [ 00:07:07.494 { 00:07:07.494 "name": "nvmf_tgt_poll_group_000", 00:07:07.494 "admin_qpairs": 0, 00:07:07.494 "io_qpairs": 0, 00:07:07.494 "current_admin_qpairs": 0, 00:07:07.494 "current_io_qpairs": 0, 00:07:07.494 "pending_bdev_io": 0, 00:07:07.494 "completed_nvme_io": 0, 00:07:07.494 "transports": [] 00:07:07.494 }, 00:07:07.494 { 00:07:07.494 "name": "nvmf_tgt_poll_group_001", 00:07:07.494 "admin_qpairs": 0, 00:07:07.494 "io_qpairs": 0, 00:07:07.494 "current_admin_qpairs": 0, 00:07:07.494 "current_io_qpairs": 0, 00:07:07.494 "pending_bdev_io": 0, 00:07:07.494 "completed_nvme_io": 0, 00:07:07.494 "transports": [] 00:07:07.494 }, 00:07:07.494 { 00:07:07.494 "name": "nvmf_tgt_poll_group_002", 00:07:07.494 "admin_qpairs": 0, 00:07:07.494 "io_qpairs": 0, 00:07:07.494 "current_admin_qpairs": 0, 00:07:07.494 "current_io_qpairs": 0, 00:07:07.494 "pending_bdev_io": 0, 00:07:07.494 "completed_nvme_io": 0, 00:07:07.494 "transports": [] 00:07:07.494 }, 00:07:07.494 { 00:07:07.494 "name": "nvmf_tgt_poll_group_003", 00:07:07.494 "admin_qpairs": 0, 00:07:07.494 "io_qpairs": 0, 00:07:07.494 "current_admin_qpairs": 0, 00:07:07.494 "current_io_qpairs": 0, 00:07:07.494 "pending_bdev_io": 0, 00:07:07.494 "completed_nvme_io": 0, 00:07:07.494 "transports": [] 00:07:07.494 } 00:07:07.494 ] 00:07:07.494 }' 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:07.494 15:59:53 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.752 [2024-07-15 15:59:53.565117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:07.752 "tick_rate": 2700000000, 00:07:07.752 "poll_groups": [ 00:07:07.752 { 00:07:07.752 "name": "nvmf_tgt_poll_group_000", 00:07:07.752 "admin_qpairs": 0, 00:07:07.752 "io_qpairs": 0, 00:07:07.752 "current_admin_qpairs": 0, 00:07:07.752 "current_io_qpairs": 0, 00:07:07.752 "pending_bdev_io": 0, 00:07:07.752 "completed_nvme_io": 0, 00:07:07.752 "transports": [ 00:07:07.752 { 00:07:07.752 "trtype": "TCP" 00:07:07.752 } 00:07:07.752 ] 00:07:07.752 }, 00:07:07.752 { 00:07:07.752 "name": "nvmf_tgt_poll_group_001", 00:07:07.752 "admin_qpairs": 0, 00:07:07.752 "io_qpairs": 0, 00:07:07.752 "current_admin_qpairs": 0, 00:07:07.752 "current_io_qpairs": 0, 00:07:07.752 "pending_bdev_io": 0, 00:07:07.752 "completed_nvme_io": 0, 00:07:07.752 "transports": [ 00:07:07.752 { 00:07:07.752 "trtype": "TCP" 00:07:07.752 } 00:07:07.752 ] 00:07:07.752 }, 00:07:07.752 { 00:07:07.752 "name": "nvmf_tgt_poll_group_002", 00:07:07.752 "admin_qpairs": 0, 00:07:07.752 "io_qpairs": 0, 00:07:07.752 "current_admin_qpairs": 0, 00:07:07.752 "current_io_qpairs": 0, 00:07:07.752 "pending_bdev_io": 0, 00:07:07.752 "completed_nvme_io": 0, 00:07:07.752 "transports": [ 00:07:07.752 { 00:07:07.752 "trtype": "TCP" 00:07:07.752 } 00:07:07.752 ] 00:07:07.752 }, 00:07:07.752 { 00:07:07.752 "name": "nvmf_tgt_poll_group_003", 00:07:07.752 "admin_qpairs": 0, 00:07:07.752 "io_qpairs": 0, 00:07:07.752 "current_admin_qpairs": 0, 00:07:07.752 "current_io_qpairs": 0, 00:07:07.752 "pending_bdev_io": 0, 00:07:07.752 "completed_nvme_io": 0, 00:07:07.752 "transports": [ 00:07:07.752 { 00:07:07.752 "trtype": "TCP" 00:07:07.752 } 00:07:07.752 ] 00:07:07.752 } 00:07:07.752 ] 00:07:07.752 }' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
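Editor's note: the jcount/jsum checks traced above are plain jq pipelines over the nvmf_get_stats JSON: poll groups are counted by listing their names and piping to wc -l, and numeric fields are summed with awk. A rough standalone equivalent, with the rpc.py path assumed to match this workspace layout:
#!/usr/bin/env bash
# Sketch of the jcount/jsum style checks seen in target/rpc.sh (not the harness's exact helpers).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
stats=$("$RPC" nvmf_get_stats)
# jcount '.poll_groups[].name' -> number of poll groups (one per core; 4 in this run)
groups=$(jq '.poll_groups[].name' <<<"$stats" | wc -l)
# jsum '.poll_groups[].io_qpairs' -> total I/O qpairs across poll groups (0 before any connect)
io_qpairs=$(jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
echo "poll groups: $groups, io qpairs: $io_qpairs"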
00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.752 Malloc1 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.752 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.753 [2024-07-15 15:59:53.730420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:07.753 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:07.753 [2024-07-15 15:59:53.752854] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:08.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:08.010 could not add new controller: failed to write to nvme-fabrics device 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.010 15:59:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.577 15:59:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.577 15:59:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.577 15:59:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.577 15:59:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.577 15:59:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.501 15:59:56 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:10.501 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.762 [2024-07-15 15:59:56.606543] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:10.762 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:10.762 could not add new controller: failed to write to nvme-fabrics device 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.762 15:59:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.328 15:59:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.328 15:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.328 15:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.328 15:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.328 15:59:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:13.862 15:59:59 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.862 [2024-07-15 15:59:59.413113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.862 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.863 15:59:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.120 16:00:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.120 16:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:14.120 16:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.120 16:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:14.120 16:00:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 [2024-07-15 16:00:02.261422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.703 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.270 16:00:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.270 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:17.270 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.270 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.270 16:00:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:19.170 16:00:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 [2024-07-15 16:00:05.075581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.170 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.740 16:00:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.740 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:19.740 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.740 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:19.740 16:00:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:21.683 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:21.683 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:21.683 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.941 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:21.941 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.941 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 [2024-07-15 16:00:07.876107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.942 16:00:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.875 16:00:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.875 16:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:22.875 16:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.875 16:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:22.875 16:00:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.772 
16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.772 [2024-07-15 16:00:10.679692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.772 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 16:00:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.773 16:00:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.336 16:00:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.336 16:00:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.336 16:00:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.336 16:00:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:25.336 16:00:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 [2024-07-15 16:00:13.438686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 [2024-07-15 16:00:13.486743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.859 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 [2024-07-15 16:00:13.534903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
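The earlier iterations of this test (target/rpc.sh lines 81-94 above) pair each subsystem with a host-side connect, a serial-number poll, and a disconnect. A minimal sketch of that host-side step, using only the addresses, NQNs, and serial number visible in the log above (the suite's own waitforserial/waitforserial_disconnect helpers add their own retry bookkeeping, so this is illustrative rather than a drop-in replacement):

nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Connect the kernel NVMe/TCP host to the subsystem exported by the target.
nvme connect --hostnqn="$hostnqn" --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
  -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

# Poll until a block device carrying the expected serial number shows up.
for i in $(seq 1 15); do
  [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ] && break
  sleep 2
done

# Tear the host connection back down before the namespace/subsystem are removed.
nvme disconnect -n "$nqn"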
00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 [2024-07-15 16:00:13.583090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
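The loop running at this point (target/rpc.sh lines 99-107) builds up and tears down the same subsystem five times purely on the target side, with no host connect in between. Condensed into plain rpc.py calls (the full script path appears elsewhere in this log; rpc_cmd is the suite's wrapper around it), one iteration amounts to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME   # serial number only, no namespaces yet
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1                   # nsid auto-assigned (1 in this run)
$rpc nvmf_subsystem_allow_any_host "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 1
$rpc nvmf_delete_subsystem "$nqn"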
00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 [2024-07-15 16:00:13.631271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:27.860 "tick_rate": 2700000000, 00:07:27.860 "poll_groups": [ 00:07:27.860 { 00:07:27.860 "name": "nvmf_tgt_poll_group_000", 00:07:27.860 "admin_qpairs": 2, 00:07:27.860 "io_qpairs": 84, 00:07:27.860 "current_admin_qpairs": 0, 00:07:27.860 "current_io_qpairs": 0, 00:07:27.860 "pending_bdev_io": 0, 00:07:27.860 "completed_nvme_io": 179, 00:07:27.860 "transports": [ 00:07:27.860 { 00:07:27.860 "trtype": "TCP" 00:07:27.860 } 00:07:27.860 ] 00:07:27.860 }, 00:07:27.860 { 00:07:27.860 "name": "nvmf_tgt_poll_group_001", 00:07:27.860 "admin_qpairs": 2, 00:07:27.860 "io_qpairs": 84, 00:07:27.860 "current_admin_qpairs": 0, 00:07:27.860 "current_io_qpairs": 0, 00:07:27.860 "pending_bdev_io": 0, 00:07:27.860 "completed_nvme_io": 199, 00:07:27.860 "transports": [ 00:07:27.860 { 00:07:27.860 "trtype": "TCP" 00:07:27.860 } 00:07:27.860 ] 00:07:27.860 }, 00:07:27.860 { 00:07:27.860 
"name": "nvmf_tgt_poll_group_002", 00:07:27.860 "admin_qpairs": 1, 00:07:27.860 "io_qpairs": 84, 00:07:27.860 "current_admin_qpairs": 0, 00:07:27.860 "current_io_qpairs": 0, 00:07:27.860 "pending_bdev_io": 0, 00:07:27.860 "completed_nvme_io": 204, 00:07:27.860 "transports": [ 00:07:27.860 { 00:07:27.860 "trtype": "TCP" 00:07:27.860 } 00:07:27.860 ] 00:07:27.860 }, 00:07:27.860 { 00:07:27.860 "name": "nvmf_tgt_poll_group_003", 00:07:27.860 "admin_qpairs": 2, 00:07:27.860 "io_qpairs": 84, 00:07:27.860 "current_admin_qpairs": 0, 00:07:27.860 "current_io_qpairs": 0, 00:07:27.860 "pending_bdev_io": 0, 00:07:27.860 "completed_nvme_io": 104, 00:07:27.860 "transports": [ 00:07:27.860 { 00:07:27.860 "trtype": "TCP" 00:07:27.860 } 00:07:27.860 ] 00:07:27.860 } 00:07:27.860 ] 00:07:27.860 }' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.860 rmmod nvme_tcp 00:07:27.860 rmmod nvme_fabrics 00:07:27.860 rmmod nvme_keyring 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 696288 ']' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 696288 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 696288 ']' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 696288 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.860 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696288 00:07:27.861 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:27.861 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.861 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696288' 00:07:27.861 killing process with pid 696288 00:07:27.861 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 696288 00:07:27.861 16:00:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 696288 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.428 16:00:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.329 16:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.329 00:07:30.329 real 0m25.432s 00:07:30.329 user 1m22.506s 00:07:30.329 sys 0m4.065s 00:07:30.329 16:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.329 16:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.329 ************************************ 00:07:30.329 END TEST nvmf_rpc 00:07:30.330 ************************************ 00:07:30.330 16:00:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:30.330 16:00:16 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:30.330 16:00:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.330 16:00:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.330 16:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.330 ************************************ 00:07:30.330 START TEST nvmf_invalid 00:07:30.330 ************************************ 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:30.330 * Looking for test storage... 
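Before the nvmf_invalid test gets going below, note that the jsum check which closed out nvmf_rpc above is essentially a jq/awk reduction over the captured nvmf_get_stats output; the helper's exact body is not shown in this log, but with the per-poll-group numbers from this run it works out as:

echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 2+2+1+2 = 7
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 4 x 84  = 336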
00:07:30.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.330 16:00:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:32.873 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:32.873 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:32.873 Found net devices under 0000:09:00.0: cvl_0_0 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:32.873 Found net devices under 0000:09:00.1: cvl_0_1 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.873 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:32.874 00:07:32.874 --- 10.0.0.2 ping statistics --- 00:07:32.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.874 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:32.874 00:07:32.874 --- 10.0.0.1 ping statistics --- 00:07:32.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.874 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=701405 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 701405 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 701405 ']' 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.874 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:32.874 [2024-07-15 16:00:18.603838] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:07:32.874 [2024-07-15 16:00:18.603932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.874 [2024-07-15 16:00:18.669721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.874 [2024-07-15 16:00:18.780024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.874 [2024-07-15 16:00:18.780090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.874 [2024-07-15 16:00:18.780119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.874 [2024-07-15 16:00:18.780132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.874 [2024-07-15 16:00:18.780142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.874 [2024-07-15 16:00:18.780194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.874 [2024-07-15 16:00:18.780248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.874 [2024-07-15 16:00:18.780297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.874 [2024-07-15 16:00:18.780300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:33.131 16:00:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2160 00:07:33.388 [2024-07-15 16:00:19.153339] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:33.388 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:33.388 { 00:07:33.388 "nqn": "nqn.2016-06.io.spdk:cnode2160", 00:07:33.388 "tgt_name": "foobar", 00:07:33.388 "method": "nvmf_create_subsystem", 00:07:33.388 "req_id": 1 00:07:33.388 } 00:07:33.388 Got JSON-RPC error response 00:07:33.388 response: 00:07:33.388 { 00:07:33.388 "code": -32603, 00:07:33.388 "message": "Unable to find target foobar" 00:07:33.388 }' 00:07:33.388 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:33.388 { 00:07:33.388 "nqn": "nqn.2016-06.io.spdk:cnode2160", 00:07:33.388 "tgt_name": "foobar", 00:07:33.388 "method": "nvmf_create_subsystem", 00:07:33.388 "req_id": 1 00:07:33.388 } 00:07:33.388 Got JSON-RPC error response 00:07:33.388 response: 00:07:33.388 { 00:07:33.388 "code": -32603, 00:07:33.388 "message": "Unable to find target foobar" 00:07:33.388 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:33.388 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:33.388 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32308 00:07:33.645 [2024-07-15 16:00:19.406182] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32308: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:33.645 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:33.645 { 00:07:33.645 "nqn": "nqn.2016-06.io.spdk:cnode32308", 00:07:33.645 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:33.645 "method": "nvmf_create_subsystem", 00:07:33.645 "req_id": 1 00:07:33.645 } 00:07:33.645 Got JSON-RPC error response 00:07:33.645 response: 00:07:33.645 { 00:07:33.645 "code": -32602, 00:07:33.645 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:33.645 }' 00:07:33.645 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:33.645 { 00:07:33.645 "nqn": "nqn.2016-06.io.spdk:cnode32308", 00:07:33.645 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:33.645 "method": "nvmf_create_subsystem", 00:07:33.645 "req_id": 1 00:07:33.645 } 00:07:33.645 Got JSON-RPC error response 00:07:33.645 response: 00:07:33.645 { 00:07:33.645 "code": -32602, 00:07:33.645 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:33.645 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:33.645 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:33.645 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25991 00:07:33.913 [2024-07-15 16:00:19.707131] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25991: invalid model number 'SPDK_Controller' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:33.913 { 00:07:33.913 "nqn": "nqn.2016-06.io.spdk:cnode25991", 00:07:33.913 "model_number": "SPDK_Controller\u001f", 00:07:33.913 "method": "nvmf_create_subsystem", 00:07:33.913 "req_id": 1 00:07:33.913 } 00:07:33.913 Got JSON-RPC error response 00:07:33.913 response: 00:07:33.913 { 00:07:33.913 "code": -32602, 00:07:33.913 "message": "Invalid MN SPDK_Controller\u001f" 00:07:33.913 }' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:33.913 { 00:07:33.913 "nqn": "nqn.2016-06.io.spdk:cnode25991", 00:07:33.913 "model_number": "SPDK_Controller\u001f", 00:07:33.913 "method": "nvmf_create_subsystem", 00:07:33.913 "req_id": 1 00:07:33.913 } 00:07:33.913 Got JSON-RPC error response 00:07:33.913 response: 00:07:33.913 { 00:07:33.913 "code": -32602, 00:07:33.913 "message": "Invalid MN SPDK_Controller\u001f" 00:07:33.913 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a3XXGWgMMGb:F}09*p`=h' 00:07:33.913 16:00:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'a3XXGWgMMGb:F}09*p`=h' nqn.2016-06.io.spdk:cnode118 00:07:34.173 [2024-07-15 16:00:20.028265] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode118: invalid serial number 'a3XXGWgMMGb:F}09*p`=h' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:34.173 { 00:07:34.173 "nqn": "nqn.2016-06.io.spdk:cnode118", 00:07:34.173 "serial_number": "a3XXGWgMMGb:F}09*p`=h", 00:07:34.173 "method": "nvmf_create_subsystem", 00:07:34.173 "req_id": 1 00:07:34.173 } 00:07:34.173 Got JSON-RPC error response 00:07:34.173 response: 00:07:34.173 { 00:07:34.173 
"code": -32602, 00:07:34.173 "message": "Invalid SN a3XXGWgMMGb:F}09*p`=h" 00:07:34.173 }' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:34.173 { 00:07:34.173 "nqn": "nqn.2016-06.io.spdk:cnode118", 00:07:34.173 "serial_number": "a3XXGWgMMGb:F}09*p`=h", 00:07:34.173 "method": "nvmf_create_subsystem", 00:07:34.173 "req_id": 1 00:07:34.173 } 00:07:34.173 Got JSON-RPC error response 00:07:34.173 response: 00:07:34.173 { 00:07:34.173 "code": -32602, 00:07:34.173 "message": "Invalid SN a3XXGWgMMGb:F}09*p`=h" 00:07:34.173 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:34.173 
16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:34.173 
16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:34.173 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '`zgluv">EcnZ`#tfA!-:Uhl#\ppgR7G?-^A3v%[' 00:07:34.174 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '`zgluv">EcnZ`#tfA!-:Uhl#\ppgR7G?-^A3v%[' nqn.2016-06.io.spdk:cnode27358 00:07:34.431 [2024-07-15 16:00:20.405514] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27358: invalid model number '`zgluv">EcnZ`#tfA!-:Uhl#\ppgR7G?-^A3v%[' 00:07:34.431 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:34.432 { 00:07:34.432 "nqn": "nqn.2016-06.io.spdk:cnode27358", 00:07:34.432 "model_number": "`zgluv\">EcnZ`#t\u007ffA!-:Uhl#\\ppgR7G?\u007f-^A3v%[", 00:07:34.432 "method": "nvmf_create_subsystem", 00:07:34.432 "req_id": 1 00:07:34.432 } 00:07:34.432 Got JSON-RPC error response 00:07:34.432 response: 00:07:34.432 { 00:07:34.432 "code": -32602, 00:07:34.432 "message": "Invalid MN `zgluv\">EcnZ`#t\u007ffA!-:Uhl#\\ppgR7G?\u007f-^A3v%[" 00:07:34.432 }' 00:07:34.432 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:34.432 { 00:07:34.432 "nqn": "nqn.2016-06.io.spdk:cnode27358", 00:07:34.432 "model_number": "`zgluv\">EcnZ`#t\u007ffA!-:Uhl#\\ppgR7G?\u007f-^A3v%[", 00:07:34.432 "method": "nvmf_create_subsystem", 00:07:34.432 "req_id": 1 00:07:34.432 } 00:07:34.432 Got JSON-RPC error response 00:07:34.432 response: 00:07:34.432 { 00:07:34.432 "code": -32602, 00:07:34.432 "message": "Invalid MN `zgluv\">EcnZ`#t\u007ffA!-:Uhl#\\ppgR7G?\u007f-^A3v%[" 00:07:34.432 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:34.432 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:34.689 [2024-07-15 16:00:20.650396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.689 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:34.945 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:34.945 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:34.945 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:34.945 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:34.945 16:00:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:35.202 [2024-07-15 16:00:21.140070] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:35.202 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:35.202 { 00:07:35.202 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:35.202 "listen_address": { 00:07:35.202 "trtype": "tcp", 00:07:35.202 "traddr": "", 00:07:35.202 "trsvcid": "4421" 00:07:35.202 }, 00:07:35.202 "method": "nvmf_subsystem_remove_listener", 00:07:35.202 "req_id": 1 00:07:35.202 } 00:07:35.202 Got JSON-RPC error response 00:07:35.202 response: 00:07:35.202 { 00:07:35.202 "code": -32602, 00:07:35.202 "message": "Invalid parameters" 00:07:35.202 
}' 00:07:35.202 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:35.202 { 00:07:35.202 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:35.202 "listen_address": { 00:07:35.202 "trtype": "tcp", 00:07:35.202 "traddr": "", 00:07:35.202 "trsvcid": "4421" 00:07:35.202 }, 00:07:35.202 "method": "nvmf_subsystem_remove_listener", 00:07:35.202 "req_id": 1 00:07:35.202 } 00:07:35.202 Got JSON-RPC error response 00:07:35.202 response: 00:07:35.202 { 00:07:35.202 "code": -32602, 00:07:35.202 "message": "Invalid parameters" 00:07:35.202 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:35.202 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30876 -i 0 00:07:35.459 [2024-07-15 16:00:21.404855] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30876: invalid cntlid range [0-65519] 00:07:35.459 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:35.459 { 00:07:35.459 "nqn": "nqn.2016-06.io.spdk:cnode30876", 00:07:35.459 "min_cntlid": 0, 00:07:35.459 "method": "nvmf_create_subsystem", 00:07:35.459 "req_id": 1 00:07:35.459 } 00:07:35.459 Got JSON-RPC error response 00:07:35.459 response: 00:07:35.459 { 00:07:35.459 "code": -32602, 00:07:35.459 "message": "Invalid cntlid range [0-65519]" 00:07:35.459 }' 00:07:35.459 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:35.459 { 00:07:35.459 "nqn": "nqn.2016-06.io.spdk:cnode30876", 00:07:35.459 "min_cntlid": 0, 00:07:35.459 "method": "nvmf_create_subsystem", 00:07:35.459 "req_id": 1 00:07:35.459 } 00:07:35.459 Got JSON-RPC error response 00:07:35.459 response: 00:07:35.459 { 00:07:35.459 "code": -32602, 00:07:35.459 "message": "Invalid cntlid range [0-65519]" 00:07:35.459 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.459 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4459 -i 65520 00:07:35.715 [2024-07-15 16:00:21.645633] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4459: invalid cntlid range [65520-65519] 00:07:35.715 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:35.715 { 00:07:35.715 "nqn": "nqn.2016-06.io.spdk:cnode4459", 00:07:35.715 "min_cntlid": 65520, 00:07:35.715 "method": "nvmf_create_subsystem", 00:07:35.715 "req_id": 1 00:07:35.715 } 00:07:35.715 Got JSON-RPC error response 00:07:35.715 response: 00:07:35.715 { 00:07:35.715 "code": -32602, 00:07:35.715 "message": "Invalid cntlid range [65520-65519]" 00:07:35.715 }' 00:07:35.715 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:35.715 { 00:07:35.715 "nqn": "nqn.2016-06.io.spdk:cnode4459", 00:07:35.715 "min_cntlid": 65520, 00:07:35.715 "method": "nvmf_create_subsystem", 00:07:35.715 "req_id": 1 00:07:35.715 } 00:07:35.715 Got JSON-RPC error response 00:07:35.715 response: 00:07:35.715 { 00:07:35.715 "code": -32602, 00:07:35.715 "message": "Invalid cntlid range [65520-65519]" 00:07:35.715 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.715 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28983 -I 0 00:07:35.971 [2024-07-15 16:00:21.886484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28983: invalid cntlid range [1-0] 00:07:35.971 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:35.971 { 00:07:35.971 "nqn": "nqn.2016-06.io.spdk:cnode28983", 00:07:35.971 "max_cntlid": 0, 00:07:35.971 "method": "nvmf_create_subsystem", 00:07:35.971 "req_id": 1 00:07:35.971 } 00:07:35.971 Got JSON-RPC error response 00:07:35.971 response: 00:07:35.971 { 00:07:35.971 "code": -32602, 00:07:35.971 "message": "Invalid cntlid range [1-0]" 00:07:35.971 }' 00:07:35.971 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:35.971 { 00:07:35.971 "nqn": "nqn.2016-06.io.spdk:cnode28983", 00:07:35.971 "max_cntlid": 0, 00:07:35.971 "method": "nvmf_create_subsystem", 00:07:35.971 "req_id": 1 00:07:35.971 } 00:07:35.971 Got JSON-RPC error response 00:07:35.971 response: 00:07:35.971 { 00:07:35.971 "code": -32602, 00:07:35.971 "message": "Invalid cntlid range [1-0]" 00:07:35.971 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.971 16:00:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24167 -I 65520 00:07:36.229 [2024-07-15 16:00:22.135275] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24167: invalid cntlid range [1-65520] 00:07:36.229 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:36.229 { 00:07:36.229 "nqn": "nqn.2016-06.io.spdk:cnode24167", 00:07:36.229 "max_cntlid": 65520, 00:07:36.229 "method": "nvmf_create_subsystem", 00:07:36.229 "req_id": 1 00:07:36.229 } 00:07:36.229 Got JSON-RPC error response 00:07:36.229 response: 00:07:36.229 { 00:07:36.229 "code": -32602, 00:07:36.229 "message": "Invalid cntlid range [1-65520]" 00:07:36.229 }' 00:07:36.229 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:36.229 { 00:07:36.229 "nqn": "nqn.2016-06.io.spdk:cnode24167", 00:07:36.229 "max_cntlid": 65520, 00:07:36.229 "method": "nvmf_create_subsystem", 00:07:36.229 "req_id": 1 00:07:36.229 } 00:07:36.229 Got JSON-RPC error response 00:07:36.229 response: 00:07:36.229 { 00:07:36.229 "code": -32602, 00:07:36.229 "message": "Invalid cntlid range [1-65520]" 00:07:36.229 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:36.229 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6971 -i 6 -I 5 00:07:36.487 [2024-07-15 16:00:22.384109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6971: invalid cntlid range [6-5] 00:07:36.487 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:36.487 { 00:07:36.487 "nqn": "nqn.2016-06.io.spdk:cnode6971", 00:07:36.487 "min_cntlid": 6, 00:07:36.487 "max_cntlid": 5, 00:07:36.487 "method": "nvmf_create_subsystem", 00:07:36.487 "req_id": 1 00:07:36.487 } 00:07:36.487 Got JSON-RPC error response 00:07:36.487 response: 00:07:36.487 { 00:07:36.487 "code": -32602, 00:07:36.487 "message": "Invalid cntlid range [6-5]" 00:07:36.487 }' 00:07:36.487 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:36.487 { 00:07:36.487 "nqn": "nqn.2016-06.io.spdk:cnode6971", 00:07:36.487 "min_cntlid": 6, 00:07:36.487 "max_cntlid": 5, 00:07:36.487 "method": "nvmf_create_subsystem", 00:07:36.487 "req_id": 1 00:07:36.487 } 00:07:36.487 Got JSON-RPC error response 00:07:36.487 
response: 00:07:36.487 { 00:07:36.487 "code": -32602, 00:07:36.487 "message": "Invalid cntlid range [6-5]" 00:07:36.487 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:36.487 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:36.747 { 00:07:36.747 "name": "foobar", 00:07:36.747 "method": "nvmf_delete_target", 00:07:36.747 "req_id": 1 00:07:36.747 } 00:07:36.747 Got JSON-RPC error response 00:07:36.747 response: 00:07:36.747 { 00:07:36.747 "code": -32602, 00:07:36.747 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:36.747 }' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:36.747 { 00:07:36.747 "name": "foobar", 00:07:36.747 "method": "nvmf_delete_target", 00:07:36.747 "req_id": 1 00:07:36.747 } 00:07:36.747 Got JSON-RPC error response 00:07:36.747 response: 00:07:36.747 { 00:07:36.747 "code": -32602, 00:07:36.747 "message": "The specified target doesn't exist, cannot delete it." 00:07:36.747 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.747 rmmod nvme_tcp 00:07:36.747 rmmod nvme_fabrics 00:07:36.747 rmmod nvme_keyring 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 701405 ']' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 701405 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 701405 ']' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 701405 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701405 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701405' 00:07:36.747 killing process with pid 701405 00:07:36.747 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 701405 00:07:36.747 16:00:22 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 701405 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.004 16:00:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.533 16:00:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.533 00:07:39.533 real 0m8.702s 00:07:39.533 user 0m19.920s 00:07:39.533 sys 0m2.455s 00:07:39.533 16:00:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.533 16:00:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:39.533 ************************************ 00:07:39.533 END TEST nvmf_invalid 00:07:39.533 ************************************ 00:07:39.533 16:00:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:39.533 16:00:24 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.533 16:00:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.533 16:00:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.533 16:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.533 ************************************ 00:07:39.533 START TEST nvmf_abort 00:07:39.533 ************************************ 00:07:39.533 16:00:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.533 * Looking for test storage... 
00:07:39.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:39.533 16:00:25 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.534 16:00:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:41.525 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.525 16:00:27 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:41.525 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.525 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:41.526 Found net devices under 0000:09:00.0: cvl_0_0 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:41.526 Found net devices under 0000:09:00.1: cvl_0_1 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:41.526 00:07:41.526 --- 10.0.0.2 ping statistics --- 00:07:41.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.526 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:07:41.526 00:07:41.526 --- 10.0.0.1 ping statistics --- 00:07:41.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.526 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=704038 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 704038 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 704038 ']' 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.526 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.526 [2024-07-15 16:00:27.338057] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:41.526 [2024-07-15 16:00:27.338138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.526 [2024-07-15 16:00:27.400437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.526 [2024-07-15 16:00:27.503127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.526 [2024-07-15 16:00:27.503175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
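The xtrace above shows nvmftestinit building the back-to-back NVMe/TCP topology for this run: the second ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, and one ping in each direction checks 10.0.0.1 <-> 10.0.0.2 before the target application is started. A minimal sketch of that setup, assuming the interface names and addresses used in this run (the authoritative logic is nvmf_tcp_init in test/nvmf/common.sh):

    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port lives inside the namespace
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept NVMe/TCP traffic on the initiator port
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                             # root namespace -> target namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"   # target namespace -> initiator
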
00:07:41.526 [2024-07-15 16:00:27.503204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.526 [2024-07-15 16:00:27.503217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.526 [2024-07-15 16:00:27.503227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.526 [2024-07-15 16:00:27.503358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.526 [2024-07-15 16:00:27.503421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.526 [2024-07-15 16:00:27.503424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 [2024-07-15 16:00:27.630813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 Malloc0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 Delay0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 [2024-07-15 16:00:27.702056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.783 16:00:27 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:41.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.042 [2024-07-15 16:00:27.806867] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:43.944 Initializing NVMe Controllers 00:07:43.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:43.944 controller IO queue size 128 less than required 00:07:43.944 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:43.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:43.944 Initialization complete. Launching workers. 
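Before the abort example started, the trace provisioned the target over RPC: a 64 MB Malloc0 bdev with 4096-byte blocks wrapped in a Delay0 delay bdev (so I/O stays outstanding long enough for aborts to find it), exposed as a namespace of nqn.2016-06.io.spdk:cnode0 with TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. A rough equivalent of that sequence, written with the rpc.py invocations the hotplug test below spells out explicitly (abort.sh drives the same methods through its rpc_cmd helper):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc_py bdev_malloc_create 64 4096 -b Malloc0                 # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The abort example then drives that subsystem for one second at queue depth 128:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
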
00:07:43.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33156 00:07:43.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33217, failed to submit 62 00:07:43.944 success 33160, unsuccess 57, failed 0 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.944 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.944 rmmod nvme_tcp 00:07:44.203 rmmod nvme_fabrics 00:07:44.203 rmmod nvme_keyring 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 704038 ']' 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 704038 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 704038 ']' 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 704038 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.203 16:00:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 704038 00:07:44.203 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:44.203 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:44.203 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 704038' 00:07:44.203 killing process with pid 704038 00:07:44.203 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 704038 00:07:44.203 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 704038 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.462 16:00:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.367 16:00:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.367 00:07:46.367 real 0m7.362s 00:07:46.367 user 0m10.699s 00:07:46.367 sys 0m2.503s 00:07:46.367 16:00:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.367 16:00:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.367 ************************************ 00:07:46.367 END TEST nvmf_abort 00:07:46.367 ************************************ 00:07:46.367 16:00:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:46.367 16:00:32 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:46.367 16:00:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.367 16:00:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.367 16:00:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 ************************************ 00:07:46.624 START TEST nvmf_ns_hotplug_stress 00:07:46.624 ************************************ 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:46.624 * Looking for test storage... 00:07:46.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.624 16:00:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.624 16:00:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.624 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.625 16:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:48.528 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:48.528 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.528 16:00:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:48.528 Found net devices under 0000:09:00.0: cvl_0_0 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:48.528 Found net devices under 0000:09:00.1: cvl_0_1 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.528 16:00:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.528 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:48.789 00:07:48.789 --- 10.0.0.2 ping statistics --- 00:07:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.789 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:07:48.789 00:07:48.789 --- 10.0.0.1 ping statistics --- 00:07:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.789 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=706267 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 706267 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 706267 ']' 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.789 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.789 [2024-07-15 16:00:34.711604] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
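The hotplug-stress run then starts its own nvmf_tgt (pid 706267 above) inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal sketch of that startup, assuming the workspace paths from this run and approximating waitforlisten with a plain RPC poll (the real helper in common/autotest_common.sh is more careful; the 0.5 s interval here is an assumption of the sketch):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE runs reactors on cores 1-3 (hence the three "Reactor started" notices),
    # -e 0xFFFF enables every tracepoint group ("Tracepoint Group Mask 0xFFFF"),
    # and -i 0 is the shared-memory id carried in NVMF_APP_SHM_ID.

    for ((i = 0; i < 100; i++)); do        # max_retries=100, as in the trace
        "$spdk/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1 && break
        sleep 0.5
    done
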
00:07:48.789 [2024-07-15 16:00:34.711677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.789 [2024-07-15 16:00:34.775574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.045 [2024-07-15 16:00:34.880899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.045 [2024-07-15 16:00:34.880979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.045 [2024-07-15 16:00:34.881001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.045 [2024-07-15 16:00:34.881013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.045 [2024-07-15 16:00:34.881036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.045 [2024-07-15 16:00:34.881130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.045 [2024-07-15 16:00:34.881262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.045 [2024-07-15 16:00:34.881265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.045 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.045 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:49.045 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.045 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.045 16:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:49.045 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.045 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:49.045 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.302 [2024-07-15 16:00:35.241964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.302 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:49.561 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.819 [2024-07-15 16:00:35.736695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.819 16:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.077 16:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:50.335 Malloc0 00:07:50.335 16:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:50.593 Delay0 00:07:50.593 16:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.851 16:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:51.107 NULL1 00:07:51.107 16:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:51.362 16:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=706624 00:07:51.362 16:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:51.362 16:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:51.362 16:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.362 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.736 Read completed with error (sct=0, sc=11) 00:07:52.736 16:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.993 16:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:52.993 16:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:52.993 true 00:07:52.993 16:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:52.993 16:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.928 16:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.186 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:54.186 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:54.442 true 00:07:54.442 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:54.442 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.699 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.956 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:54.956 16:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:55.212 true 00:07:55.212 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:55.212 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.470 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.729 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:55.729 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:55.729 true 00:07:56.030 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:56.030 16:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.965 16:00:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.222 16:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:57.222 16:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:57.478 true 00:07:57.478 16:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:57.478 16:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.443 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.443 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:58.443 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:58.701 true 00:07:58.701 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:58.701 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.976 16:00:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.241 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:59.241 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:59.497 true 00:07:59.497 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:07:59.497 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.754 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.011 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:00.011 16:00:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:00.296 true 00:08:00.296 16:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:00.296 16:00:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.228 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.485 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:01.485 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:01.742 true 00:08:01.742 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:01.742 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.000 16:00:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.258 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:02.258 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:02.530 true 00:08:02.530 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:02.530 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.800 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.056 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:03.056 16:00:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:03.313 true 00:08:03.313 16:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:03.313 16:00:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.243 16:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.521 16:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:04.521 16:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:04.778 true 00:08:05.037 16:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:05.037 16:00:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.037 16:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.295 16:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:05.295 16:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:05.552 true 00:08:05.552 16:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:05.552 16:00:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.489 16:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.746 16:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:06.746 16:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:07.003 true 00:08:07.003 16:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:07.003 16:00:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.259 16:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.515 16:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:07.515 16:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:07.771 true 00:08:07.771 16:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:07.771 16:00:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.703 16:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.961 16:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:08.961 16:00:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:09.218 true 00:08:09.218 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:09.218 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.476 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.733 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:09.733 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:09.989 true 00:08:09.989 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:09.989 16:00:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.920 16:00:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.177 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:11.177 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:11.434 true 00:08:11.434 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:11.434 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.691 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.948 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:11.948 16:00:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:12.205 true 00:08:12.206 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:12.206 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.463 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.721 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:12.721 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:12.978 true 00:08:12.978 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:12.978 16:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.375 16:00:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.375 16:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:14.375 16:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:14.667 true 00:08:14.667 16:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:14.667 16:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.232 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.490 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:15.490 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:15.748 true 00:08:15.748 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:15.748 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.005 16:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.263 16:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:16.263 16:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:16.520 true 00:08:16.520 16:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:16.520 16:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.458 16:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.714 16:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:17.714 16:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:17.971 true 00:08:17.971 16:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:17.971 16:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.229 16:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.486 16:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:18.486 16:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:18.744 true 00:08:18.744 16:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:18.744 16:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.678 16:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.937 16:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:19.937 16:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:19.937 true 00:08:19.937 16:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:19.937 16:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.503 16:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.503 16:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:20.503 16:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:20.761 true 00:08:20.761 16:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:20.761 16:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.698 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.698 Initializing NVMe Controllers 00:08:21.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:21.698 Controller IO queue size 128, less than required. 00:08:21.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:21.698 Controller IO queue size 128, less than required. 00:08:21.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:21.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:21.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:21.698 Initialization complete. Launching workers. 
00:08:21.698 ======================================================== 00:08:21.698 Latency(us) 00:08:21.698 Device Information : IOPS MiB/s Average min max 00:08:21.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1086.28 0.53 62311.20 2379.51 1049623.39 00:08:21.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11083.25 5.41 11550.04 2772.91 364077.12 00:08:21.699 ======================================================== 00:08:21.699 Total : 12169.53 5.94 16081.11 2379.51 1049623.39 00:08:21.699 00:08:21.957 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:21.957 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:22.216 true 00:08:22.216 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 706624 00:08:22.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (706624) - No such process 00:08:22.216 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 706624 00:08:22.216 16:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.216 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.473 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:22.473 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:22.473 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:22.473 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.473 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:22.731 null0 00:08:22.731 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.731 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.731 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:22.989 null1 00:08:22.989 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.989 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.989 16:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:23.247 null2 00:08:23.247 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:23.247 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.247 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:23.505 null3 
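[editor's note] The trace above (script markers @44-@50, then @53-@55 once kill -0 reports "No such process") is the first stress phase: while the background I/O generator (PID 706624 in this run) is still alive, the script repeatedly detaches namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one unit per iteration. Below is a minimal bash sketch of that loop reconstructed from the traced commands, not the verbatim ns_hotplug_stress.sh source; $rpc_py stands for the full scripts/rpc.py path seen in the log, $perf_pid and the starting null_size value are illustrative assumptions.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000   # assumed starting value; the visible trace is already at 1010+
  # Keep hot-removing/re-adding namespace 1 and resizing NULL1 while the
  # I/O generator (PID held in $perf_pid, 706624 in this run) is alive.
  while kill -0 "$perf_pid" 2>/dev/null; do
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$rpc_py" bdev_null_resize NULL1 "$null_size"
  done
  # Once the generator exits (@53-@55): reap it and drop both namespaces.
  wait "$perf_pid"
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2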
00:08:23.505 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:23.505 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.505 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:23.763 null4 00:08:23.763 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:23.763 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.763 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:24.021 null5 00:08:24.021 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.021 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.021 16:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:24.279 null6 00:08:24.279 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.279 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.279 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:24.537 null7 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
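[editor's note] From marker @58 onward the run switches to the parallel phase traced above and below: the script creates eight null bdevs (null0 through null7, with the traced arguments "100 4096" for size and block size) and launches eight background add_remove workers, one per bdev, collecting their PIDs so that @66 can wait on them. A sketch of that setup, reconstructed from the traced @58-@66 commands; variable names are illustrative and $rpc_py is the scripts/rpc.py path from the log.

  nthreads=8
  pids=()
  # @59/@60: one null bdev per worker (size 100, block size 4096).
  for (( i = 0; i < nthreads; i++ )); do
      "$rpc_py" bdev_null_create "null$i" 100 4096
  done
  # @62-@64: launch the workers in the background; namespace IDs are 1-based.
  for (( i = 0; i < nthreads; i++ )); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  # @66: in this run the wait covers PIDs 710616 710617 710619 ... 710629.
  wait "${pids[@]}"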
00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
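[editor's note] Each of the eight workers runs the add_remove helper traced at markers @14-@18: ten iterations of attaching its bdev under a fixed namespace ID and immediately detaching it again, which is what produces the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns storm in the remainder of this log. A sketch of the helper as reconstructed from those markers (not the verbatim script source):

  # @14: each worker owns one namespace ID and one null bdev.
  add_remove() {
      local nsid=$1 bdev=$2 i
      # @16-@18: ten add/remove cycles against the same subsystem.
      for (( i = 0; i < 10; i++ )); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }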
00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 710616 710617 710619 710621 710623 710625 710627 710629 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.537 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.795 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.054 16:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.312 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.582 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.840 16:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.098 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.357 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.357 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.358 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.616 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.616 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.616 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.616 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.874 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.874 
16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.132 16:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.390 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.648 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.905 16:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.163 
16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.163 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.421 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.422 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.422 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.422 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.422 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.679 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.938 16:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.196 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.196 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.197 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.197 
16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.197 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.197 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.197 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.197 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.455 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.713 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
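The block of rpc.py calls above is the namespace hot-plug loop from target/ns_hotplug_stress.sh (the @16/@17/@18 annotations in the xtrace output). A minimal sketch of the pattern visible here, assuming the shuffled nsid ordering and the null0..null7 bdev names shown in the log; the real script runs this alongside I/O and may differ in its details:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    i=0
    while (( i < 10 )); do
        # attach the eight null bdevs as namespaces 1..8, in a random order
        for n in $(seq 1 8 | shuf); do
            $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        # then detach them again, also in a random order
        for n in $(seq 1 8 | shuf); do
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
        (( ++i ))
    done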
00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.001 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:30.002 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.002 16:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.002 rmmod nvme_tcp 00:08:30.262 rmmod nvme_fabrics 00:08:30.262 rmmod nvme_keyring 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 706267 ']' 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 706267 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 706267 ']' 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 706267 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 706267 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 706267' 00:08:30.262 killing process with pid 706267 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 706267 00:08:30.262 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 706267 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.522 16:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.428 16:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.428 00:08:32.428 real 0m45.994s 00:08:32.428 user 3m28.774s 00:08:32.428 sys 0m16.727s 00:08:32.428 16:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.428 16:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.428 ************************************ 00:08:32.428 END TEST nvmf_ns_hotplug_stress 00:08:32.428 ************************************ 00:08:32.428 16:01:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:32.428 16:01:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:32.428 16:01:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:32.428 16:01:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.428 16:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.687 ************************************ 00:08:32.687 START TEST nvmf_connect_stress 00:08:32.687 ************************************ 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:32.687 * Looking for test storage... 
00:08:32.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.687 16:01:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:34.593 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:34.593 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:34.593 Found net devices under 0000:09:00.0: cvl_0_0 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.593 16:01:20 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:34.593 Found net devices under 0000:09:00.1: cvl_0_1 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.593 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.594 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:08:34.852 00:08:34.852 --- 10.0.0.2 ping statistics --- 00:08:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.852 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:34.852 00:08:34.852 --- 10.0.0.1 ping statistics --- 00:08:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.852 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.852 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=713380 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 713380 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 713380 ']' 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.853 16:01:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.853 [2024-07-15 16:01:20.790477] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
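The output above is nvmftestinit/nvmf_tcp_init (test/nvmf/common.sh) splitting the two e810 ports across network namespaces before the target is started: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and a one-packet ping in each direction confirms the link. A condensed sketch of that setup and of the target launch that follows, using the interface names, core mask and port seen in this log (examples taken from this run, not fixed values):

    # target side: dedicated namespace holding one port of the NIC
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: the second port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions, then start nvmf_tgt inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE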
00:08:34.853 [2024-07-15 16:01:20.790554] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.111 [2024-07-15 16:01:20.856081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:35.111 [2024-07-15 16:01:20.968811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.111 [2024-07-15 16:01:20.968869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.111 [2024-07-15 16:01:20.968897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.111 [2024-07-15 16:01:20.968908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.111 [2024-07-15 16:01:20.968918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.111 [2024-07-15 16:01:20.969005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.111 [2024-07-15 16:01:20.969071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.111 [2024-07-15 16:01:20.969074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.111 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.370 [2024-07-15 16:01:21.117616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.370 [2024-07-15 16:01:21.142121] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.370 NULL1 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.370 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=713522 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.371 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.627 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.627 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:35.627 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.628 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.628 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.886 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.886 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:35.886 16:01:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.886 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.886 16:01:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.453 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.453 16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:36.453 
16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.453 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.453 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.710 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.710 16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:36.710 16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.710 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.710 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.969 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.969 16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:36.969 16:01:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.969 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.969 16:01:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.227 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.227 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:37.227 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.227 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.227 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.485 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.485 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:37.485 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.485 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.485 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.050 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.050 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:38.050 16:01:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.050 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.050 16:01:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.306 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.306 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:38.306 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.306 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.306 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.564 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.564 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:38.564 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:08:38.564 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.564 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.823 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:38.823 16:01:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.823 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.823 16:01:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.081 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.081 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:39.081 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.081 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.081 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.647 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.647 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:39.647 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.647 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.647 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.907 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.907 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:39.907 16:01:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.907 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.907 16:01:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.166 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.166 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:40.166 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.166 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.166 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.424 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.424 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:40.424 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.424 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.424 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.681 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.681 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:40.681 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.681 16:01:26 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.681 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.251 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.251 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:41.251 16:01:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.251 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.251 16:01:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.509 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.509 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:41.509 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.509 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.509 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.767 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.767 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:41.767 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.767 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.767 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.026 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.026 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:42.026 16:01:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.026 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.026 16:01:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.286 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.286 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:42.286 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.286 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.286 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.854 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.854 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:42.854 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.854 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.854 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.111 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.111 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:43.111 16:01:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.111 16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.111 
16:01:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.370 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.370 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:43.370 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.370 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.370 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.630 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.630 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:43.630 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.630 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.630 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.889 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.889 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:43.889 16:01:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.889 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.889 16:01:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.458 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:44.458 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.458 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.458 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.719 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:44.719 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.719 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.719 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.978 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.978 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:44.978 16:01:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.978 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.978 16:01:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.237 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.237 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:45.237 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.237 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.237 16:01:31 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.496 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 713522 00:08:45.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (713522) - No such process 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 713522 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.496 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.496 rmmod nvme_tcp 00:08:45.756 rmmod nvme_fabrics 00:08:45.756 rmmod nvme_keyring 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 713380 ']' 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 713380 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 713380 ']' 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 713380 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 713380 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 713380' 00:08:45.756 killing process with pid 713380 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 713380 00:08:45.756 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 713380 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.015 16:01:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.959 16:01:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.959 00:08:47.959 real 0m15.428s 00:08:47.959 user 0m38.337s 00:08:47.959 sys 0m5.994s 00:08:47.959 16:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.959 16:01:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 ************************************ 00:08:47.959 END TEST nvmf_connect_stress 00:08:47.959 ************************************ 00:08:47.959 16:01:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:47.959 16:01:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:47.959 16:01:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.959 16:01:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.959 16:01:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 ************************************ 00:08:47.959 START TEST nvmf_fused_ordering 00:08:47.959 ************************************ 00:08:47.959 16:01:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:48.218 * Looking for test storage... 
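For readers following the connect_stress trace above: the test keeps checking that the backgrounded stress client (pid 713522) is still alive with kill -0 and issues a management RPC on each pass; once kill -0 reports "No such process" it reaps the client with wait, removes its rpc.txt scratch file, and tears the target down through nvmftestfini (rmmod of nvme-tcp/nvme-fabrics and killing the nvmf_tgt pid). A minimal sketch of that polling pattern, with hypothetical names ($stress_pid and $testdir are stand-ins, and the exact RPC payload is not visible in the trace):

    # Sketch only - not the literal connect_stress.sh
    while kill -0 "$stress_pid"; do   # loop while the stress client is still running
        rpc_cmd                       # management RPC each pass (payload elided; not visible in the trace)
    done
    wait "$stress_pid"                # reap the exited client
    rm -f "$testdir/rpc.txt"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                      # unload nvme-tcp/nvme-fabrics and kill the nvmf_tgt process
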
00:08:48.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.218 16:01:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:50.124 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.124 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:50.125 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:50.125 Found net devices under 0000:09:00.0: cvl_0_0 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.125 16:01:36 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:50.125 Found net devices under 0000:09:00.1: cvl_0_1 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.125 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:50.386 00:08:50.386 --- 10.0.0.2 ping statistics --- 00:08:50.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.386 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:08:50.386 00:08:50.386 --- 10.0.0.1 ping statistics --- 00:08:50.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.386 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=716677 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 716677 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 716677 ']' 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.386 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.386 [2024-07-15 16:01:36.268514] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:08:50.386 [2024-07-15 16:01:36.268601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.386 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.386 [2024-07-15 16:01:36.331820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.645 [2024-07-15 16:01:36.442624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.645 [2024-07-15 16:01:36.442688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.645 [2024-07-15 16:01:36.442701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.645 [2024-07-15 16:01:36.442712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.645 [2024-07-15 16:01:36.442722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.645 [2024-07-15 16:01:36.442749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 [2024-07-15 16:01:36.582357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 [2024-07-15 16:01:36.598546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 NULL1 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.645 16:01:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:50.645 [2024-07-15 16:01:36.645430] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:50.645 [2024-07-15 16:01:36.645472] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716702 ] 00:08:50.904 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.164 Attached to nqn.2016-06.io.spdk:cnode1 00:08:51.164 Namespace ID: 1 size: 1GB 00:08:51.164 fused_ordering(0) 00:08:51.164 fused_ordering(1) 00:08:51.164 fused_ordering(2) 00:08:51.164 fused_ordering(3) 00:08:51.164 fused_ordering(4) 00:08:51.164 fused_ordering(5) 00:08:51.164 fused_ordering(6) 00:08:51.164 fused_ordering(7) 00:08:51.164 fused_ordering(8) 00:08:51.164 fused_ordering(9) 00:08:51.164 fused_ordering(10) 00:08:51.164 fused_ordering(11) 00:08:51.164 fused_ordering(12) 00:08:51.164 fused_ordering(13) 00:08:51.164 fused_ordering(14) 00:08:51.164 fused_ordering(15) 00:08:51.164 fused_ordering(16) 00:08:51.164 fused_ordering(17) 00:08:51.164 fused_ordering(18) 00:08:51.164 fused_ordering(19) 00:08:51.164 fused_ordering(20) 00:08:51.164 fused_ordering(21) 00:08:51.164 fused_ordering(22) 00:08:51.164 fused_ordering(23) 00:08:51.164 fused_ordering(24) 00:08:51.164 fused_ordering(25) 00:08:51.164 fused_ordering(26) 00:08:51.164 fused_ordering(27) 00:08:51.164 fused_ordering(28) 00:08:51.164 fused_ordering(29) 00:08:51.164 fused_ordering(30) 00:08:51.164 fused_ordering(31) 00:08:51.164 fused_ordering(32) 00:08:51.164 fused_ordering(33) 00:08:51.164 fused_ordering(34) 00:08:51.164 fused_ordering(35) 00:08:51.164 fused_ordering(36) 00:08:51.164 fused_ordering(37) 00:08:51.164 fused_ordering(38) 00:08:51.164 fused_ordering(39) 00:08:51.164 fused_ordering(40) 00:08:51.164 fused_ordering(41) 00:08:51.164 fused_ordering(42) 00:08:51.164 fused_ordering(43) 00:08:51.164 
fused_ordering(44) 00:08:51.164 fused_ordering(45) 00:08:51.164 fused_ordering(46) 00:08:51.164 fused_ordering(47) 00:08:51.164 fused_ordering(48) 00:08:51.164 fused_ordering(49) 00:08:51.164 fused_ordering(50) 00:08:51.164 fused_ordering(51) 00:08:51.164 fused_ordering(52) 00:08:51.164 fused_ordering(53) 00:08:51.164 fused_ordering(54) 00:08:51.164 fused_ordering(55) 00:08:51.164 fused_ordering(56) 00:08:51.164 fused_ordering(57) 00:08:51.164 fused_ordering(58) 00:08:51.164 fused_ordering(59) 00:08:51.164 fused_ordering(60) 00:08:51.164 fused_ordering(61) 00:08:51.164 fused_ordering(62) 00:08:51.164 fused_ordering(63) 00:08:51.164 fused_ordering(64) 00:08:51.164 fused_ordering(65) 00:08:51.164 fused_ordering(66) 00:08:51.164 fused_ordering(67) 00:08:51.164 fused_ordering(68) 00:08:51.164 fused_ordering(69) 00:08:51.164 fused_ordering(70) 00:08:51.164 fused_ordering(71) 00:08:51.164 fused_ordering(72) 00:08:51.164 fused_ordering(73) 00:08:51.164 fused_ordering(74) 00:08:51.164 fused_ordering(75) 00:08:51.164 fused_ordering(76) 00:08:51.164 fused_ordering(77) 00:08:51.164 fused_ordering(78) 00:08:51.164 fused_ordering(79) 00:08:51.164 fused_ordering(80) 00:08:51.164 fused_ordering(81) 00:08:51.164 fused_ordering(82) 00:08:51.164 fused_ordering(83) 00:08:51.164 fused_ordering(84) 00:08:51.164 fused_ordering(85) 00:08:51.164 fused_ordering(86) 00:08:51.164 fused_ordering(87) 00:08:51.164 fused_ordering(88) 00:08:51.164 fused_ordering(89) 00:08:51.164 fused_ordering(90) 00:08:51.164 fused_ordering(91) 00:08:51.164 fused_ordering(92) 00:08:51.164 fused_ordering(93) 00:08:51.164 fused_ordering(94) 00:08:51.164 fused_ordering(95) 00:08:51.164 fused_ordering(96) 00:08:51.164 fused_ordering(97) 00:08:51.164 fused_ordering(98) 00:08:51.164 fused_ordering(99) 00:08:51.164 fused_ordering(100) 00:08:51.164 fused_ordering(101) 00:08:51.164 fused_ordering(102) 00:08:51.164 fused_ordering(103) 00:08:51.164 fused_ordering(104) 00:08:51.164 fused_ordering(105) 00:08:51.164 fused_ordering(106) 00:08:51.164 fused_ordering(107) 00:08:51.164 fused_ordering(108) 00:08:51.164 fused_ordering(109) 00:08:51.164 fused_ordering(110) 00:08:51.164 fused_ordering(111) 00:08:51.164 fused_ordering(112) 00:08:51.164 fused_ordering(113) 00:08:51.164 fused_ordering(114) 00:08:51.164 fused_ordering(115) 00:08:51.164 fused_ordering(116) 00:08:51.164 fused_ordering(117) 00:08:51.164 fused_ordering(118) 00:08:51.164 fused_ordering(119) 00:08:51.164 fused_ordering(120) 00:08:51.164 fused_ordering(121) 00:08:51.164 fused_ordering(122) 00:08:51.164 fused_ordering(123) 00:08:51.164 fused_ordering(124) 00:08:51.164 fused_ordering(125) 00:08:51.164 fused_ordering(126) 00:08:51.164 fused_ordering(127) 00:08:51.164 fused_ordering(128) 00:08:51.164 fused_ordering(129) 00:08:51.164 fused_ordering(130) 00:08:51.164 fused_ordering(131) 00:08:51.164 fused_ordering(132) 00:08:51.164 fused_ordering(133) 00:08:51.164 fused_ordering(134) 00:08:51.164 fused_ordering(135) 00:08:51.164 fused_ordering(136) 00:08:51.164 fused_ordering(137) 00:08:51.164 fused_ordering(138) 00:08:51.164 fused_ordering(139) 00:08:51.164 fused_ordering(140) 00:08:51.164 fused_ordering(141) 00:08:51.164 fused_ordering(142) 00:08:51.164 fused_ordering(143) 00:08:51.164 fused_ordering(144) 00:08:51.164 fused_ordering(145) 00:08:51.164 fused_ordering(146) 00:08:51.164 fused_ordering(147) 00:08:51.164 fused_ordering(148) 00:08:51.164 fused_ordering(149) 00:08:51.164 fused_ordering(150) 00:08:51.164 fused_ordering(151) 00:08:51.164 fused_ordering(152) 00:08:51.164 
fused_ordering(153) 00:08:51.164 fused_ordering(154) 00:08:51.164 fused_ordering(155) 00:08:51.164 fused_ordering(156) 00:08:51.164 fused_ordering(157) 00:08:51.164 fused_ordering(158) 00:08:51.164 fused_ordering(159) 00:08:51.164 fused_ordering(160) 00:08:51.164 fused_ordering(161) 00:08:51.164 fused_ordering(162) 00:08:51.164 fused_ordering(163) 00:08:51.164 fused_ordering(164) 00:08:51.164 fused_ordering(165) 00:08:51.164 fused_ordering(166) 00:08:51.164 fused_ordering(167) 00:08:51.164 fused_ordering(168) 00:08:51.164 fused_ordering(169) 00:08:51.164 fused_ordering(170) 00:08:51.164 fused_ordering(171) 00:08:51.164 fused_ordering(172) 00:08:51.164 fused_ordering(173) 00:08:51.164 fused_ordering(174) 00:08:51.164 fused_ordering(175) 00:08:51.165 fused_ordering(176) 00:08:51.165 fused_ordering(177) 00:08:51.165 fused_ordering(178) 00:08:51.165 fused_ordering(179) 00:08:51.165 fused_ordering(180) 00:08:51.165 fused_ordering(181) 00:08:51.165 fused_ordering(182) 00:08:51.165 fused_ordering(183) 00:08:51.165 fused_ordering(184) 00:08:51.165 fused_ordering(185) 00:08:51.165 fused_ordering(186) 00:08:51.165 fused_ordering(187) 00:08:51.165 fused_ordering(188) 00:08:51.165 fused_ordering(189) 00:08:51.165 fused_ordering(190) 00:08:51.165 fused_ordering(191) 00:08:51.165 fused_ordering(192) 00:08:51.165 fused_ordering(193) 00:08:51.165 fused_ordering(194) 00:08:51.165 fused_ordering(195) 00:08:51.165 fused_ordering(196) 00:08:51.165 fused_ordering(197) 00:08:51.165 fused_ordering(198) 00:08:51.165 fused_ordering(199) 00:08:51.165 fused_ordering(200) 00:08:51.165 fused_ordering(201) 00:08:51.165 fused_ordering(202) 00:08:51.165 fused_ordering(203) 00:08:51.165 fused_ordering(204) 00:08:51.165 fused_ordering(205) 00:08:51.425 fused_ordering(206) 00:08:51.425 fused_ordering(207) 00:08:51.425 fused_ordering(208) 00:08:51.425 fused_ordering(209) 00:08:51.425 fused_ordering(210) 00:08:51.425 fused_ordering(211) 00:08:51.425 fused_ordering(212) 00:08:51.425 fused_ordering(213) 00:08:51.425 fused_ordering(214) 00:08:51.425 fused_ordering(215) 00:08:51.425 fused_ordering(216) 00:08:51.425 fused_ordering(217) 00:08:51.425 fused_ordering(218) 00:08:51.425 fused_ordering(219) 00:08:51.425 fused_ordering(220) 00:08:51.425 fused_ordering(221) 00:08:51.425 fused_ordering(222) 00:08:51.425 fused_ordering(223) 00:08:51.425 fused_ordering(224) 00:08:51.425 fused_ordering(225) 00:08:51.425 fused_ordering(226) 00:08:51.425 fused_ordering(227) 00:08:51.425 fused_ordering(228) 00:08:51.425 fused_ordering(229) 00:08:51.425 fused_ordering(230) 00:08:51.425 fused_ordering(231) 00:08:51.425 fused_ordering(232) 00:08:51.425 fused_ordering(233) 00:08:51.425 fused_ordering(234) 00:08:51.425 fused_ordering(235) 00:08:51.425 fused_ordering(236) 00:08:51.425 fused_ordering(237) 00:08:51.425 fused_ordering(238) 00:08:51.425 fused_ordering(239) 00:08:51.425 fused_ordering(240) 00:08:51.425 fused_ordering(241) 00:08:51.425 fused_ordering(242) 00:08:51.425 fused_ordering(243) 00:08:51.425 fused_ordering(244) 00:08:51.425 fused_ordering(245) 00:08:51.425 fused_ordering(246) 00:08:51.425 fused_ordering(247) 00:08:51.425 fused_ordering(248) 00:08:51.425 fused_ordering(249) 00:08:51.425 fused_ordering(250) 00:08:51.425 fused_ordering(251) 00:08:51.425 fused_ordering(252) 00:08:51.425 fused_ordering(253) 00:08:51.425 fused_ordering(254) 00:08:51.425 fused_ordering(255) 00:08:51.425 fused_ordering(256) 00:08:51.425 fused_ordering(257) 00:08:51.425 fused_ordering(258) 00:08:51.425 fused_ordering(259) 00:08:51.425 fused_ordering(260) 
00:08:51.425 fused_ordering(261) [sequential fused_ordering entries 262 through 1012 continue in the same form, logged between 00:08:51.425 and 00:08:53.130; elided for brevity] 00:08:53.130 fused_ordering(1012)
00:08:53.130 fused_ordering(1013) 00:08:53.130 fused_ordering(1014) 00:08:53.130 fused_ordering(1015) 00:08:53.130 fused_ordering(1016) 00:08:53.130 fused_ordering(1017) 00:08:53.130 fused_ordering(1018) 00:08:53.130 fused_ordering(1019) 00:08:53.130 fused_ordering(1020) 00:08:53.130 fused_ordering(1021) 00:08:53.130 fused_ordering(1022) 00:08:53.130 fused_ordering(1023) 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.130 16:01:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.130 rmmod nvme_tcp 00:08:53.130 rmmod nvme_fabrics 00:08:53.130 rmmod nvme_keyring 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 716677 ']' 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 716677 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 716677 ']' 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 716677 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 716677 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 716677' 00:08:53.130 killing process with pid 716677 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 716677 00:08:53.130 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 716677 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:53.389 16:01:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.925 16:01:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.925 00:08:55.925 real 0m7.427s 00:08:55.925 user 0m4.922s 00:08:55.925 sys 0m3.132s 00:08:55.925 16:01:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.925 16:01:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:55.925 ************************************ 00:08:55.925 END TEST nvmf_fused_ordering 00:08:55.925 ************************************ 00:08:55.925 16:01:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:55.925 16:01:41 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:55.925 16:01:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:55.925 16:01:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.925 16:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.925 ************************************ 00:08:55.925 START TEST nvmf_delete_subsystem 00:08:55.925 ************************************ 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:55.925 * Looking for test storage... 00:08:55.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.925 16:01:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:57.828 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:57.828 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.828 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:57.829 Found net devices under 0000:09:00.0: cvl_0_0 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:57.829 Found net devices under 0000:09:00.1: cvl_0_1 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:57.829 00:08:57.829 --- 10.0.0.2 ping statistics --- 00:08:57.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.829 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:08:57.829 00:08:57.829 --- 10.0.0.1 ping statistics --- 00:08:57.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.829 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=719022 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 719022 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 719022 ']' 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.829 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.829 [2024-07-15 16:01:43.703503] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:57.829 [2024-07-15 16:01:43.703594] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.829 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.829 [2024-07-15 16:01:43.769313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.086 [2024-07-15 16:01:43.878938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:58.086 [2024-07-15 16:01:43.879012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.086 [2024-07-15 16:01:43.879027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.086 [2024-07-15 16:01:43.879038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.086 [2024-07-15 16:01:43.879048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.086 [2024-07-15 16:01:43.879100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.086 [2024-07-15 16:01:43.879104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.086 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.086 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:58.086 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.086 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.086 16:01:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 [2024-07-15 16:01:44.023213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 [2024-07-15 16:01:44.039460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 NULL1 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 Delay0 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=719044 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:58.086 16:01:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:58.344 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.345 [2024-07-15 16:01:44.114159] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
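For orientation, the configuration the trace above just walked through can be summarized as the sketch below. It is a condensed reading aid rather than the literal test script: the test issues these RPCs through its rpc_cmd shell wrapper, so the scripts/rpc.py entry point shown here is an assumption, while every subsystem name, address, and argument is taken from the trace itself.
    # Assumed direct rpc.py invocations; delete_subsystem.sh actually goes through the rpc_cmd wrapper.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Queue 5 s of 512 B random mixed I/O (depth 128, 70% reads) against the listener in the background ...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    # ... then, two seconds in, delete the subsystem underneath the running workload.
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1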
00:09:00.251 16:01:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.251 16:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.251 16:01:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:00.251 [repeated completion records of the form "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6"; elided for brevity]
00:09:00.251 [2024-07-15 16:01:46.154773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdbcc00d2f0 is same with the state(5) to be set
00:09:00.251 [further "Read/Write completed with error (sct=0, sc=8)" records, elided]
00:09:00.251 [2024-07-15 16:01:46.155632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdbcc000c00 is same with the state(5) to be set
00:09:00.251 [further completion-error records and "starting I/O failed: -6" entries, elided]
00:09:00.252 [2024-07-15 16:01:46.156145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde3e0 is same with the state(5) to be set
00:09:00.252 [further "Read/Write completed with error (sct=0, sc=8)" records follow]
(sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Read completed with error (sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Read completed with error (sct=0, sc=8) 00:09:00.252 Read completed with error (sct=0, sc=8) 00:09:00.252 Write completed with error (sct=0, sc=8) 00:09:00.252 Read completed with error (sct=0, sc=8) 00:09:01.190 [2024-07-15 16:01:47.131409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdfac0 is same with the state(5) to be set 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 [2024-07-15 16:01:47.155102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde5c0 is same with the state(5) to be set 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 [2024-07-15 16:01:47.155299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde980 is same with the state(5) to be set 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, 
sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 [2024-07-15 16:01:47.158123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdbcc00cfe0 is same with the state(5) to be set 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Write completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 Read completed with error (sct=0, sc=8) 00:09:01.190 [2024-07-15 16:01:47.159480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdbcc00d600 is same with the state(5) to be set 00:09:01.190 Initializing NVMe Controllers 00:09:01.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:01.190 Controller IO queue size 128, less than required. 00:09:01.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:01.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:01.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:01.190 Initialization complete. Launching workers. 
00:09:01.190 ======================================================== 00:09:01.190 Latency(us) 00:09:01.190 Device Information : IOPS MiB/s Average min max 00:09:01.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.74 0.08 908551.02 376.55 1012440.81 00:09:01.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.84 0.07 940356.86 642.65 1011238.01 00:09:01.190 ======================================================== 00:09:01.190 Total : 314.58 0.15 923801.77 376.55 1012440.81 00:09:01.190 00:09:01.190 [2024-07-15 16:01:47.159897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdfac0 (9): Bad file descriptor 00:09:01.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:01.190 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.190 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:01.190 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719044 00:09:01.190 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719044 00:09:01.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (719044) - No such process 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 719044 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 719044 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 719044 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
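Annotation: the block above is the heart of the delete-while-busy case. spdk_nvme_perf keeps I/O in flight against nqn.2016-06.io.spdk:cnode1 while the subsystem is deleted over RPC, so queued requests complete with sct=0, sc=8 (generic status 0x08, command aborted due to SQ deletion) and perf exits reporting errors. delete_subsystem.sh then polls the perf pid until it is gone before recreating the subsystem. Below is a minimal sketch of that poll loop reconstructed from the xtrace; $perf_pid stands in for the literal pid 719044 and the structure is simplified, so treat it as an illustration rather than the script verbatim.

    # Poll the perf process after nvmf_delete_subsystem; give up after ~15 s (30 * 0.5 s).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf still running after subsystem delete" >&2; exit 1; }
        sleep 0.5
    done
    # Once the pid is gone, `wait $perf_pid` fails with "No such process"; the NOT()
    # wrapper seen in the trace treats that expected failure as success, and the test
    # recreates the subsystem and listener for the next phase.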
00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 [2024-07-15 16:01:47.683234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=719453 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:01.761 16:01:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.761 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.761 [2024-07-15 16:01:47.745785] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
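For reference, the perf invocation above restated with per-option comments. The readings are my interpretation of spdk_nvme_perf's usual options and are not stated in the log, so take them as a best-effort gloss; the values themselves are copied from the trace.

    # Hypothetical restatement of the traced command; SPDK_BIN_DIR is a placeholder
    # for the build/bin directory inside the job workspace.
    perf_args=(
        -c 0xC                                                     # core mask: workers on cores 2 and 3
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # target listener to connect to
        -t 3                                                       # run time in seconds
        -q 128                                                     # queue depth
        -w randrw -M 70                                            # random mixed workload, ~70% reads
        -o 512                                                     # I/O size in bytes
        -P 4                                                       # copied verbatim from the trace
    )
    "$SPDK_BIN_DIR/spdk_nvme_perf" "${perf_args[@]}"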
00:09:02.328 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.328 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:02.328 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.895 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.895 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:02.895 16:01:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.464 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.464 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:03.464 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.723 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.723 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:03.723 16:01:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.291 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.291 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:04.291 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.861 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.861 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:04.861 16:01:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.121 Initializing NVMe Controllers 00:09:05.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.121 Controller IO queue size 128, less than required. 00:09:05.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:05.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:05.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:05.121 Initialization complete. Launching workers. 
00:09:05.121 ======================================================== 00:09:05.121 Latency(us) 00:09:05.121 Device Information : IOPS MiB/s Average min max 00:09:05.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004401.42 1000169.02 1011485.00 00:09:05.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005357.53 1000182.26 1043907.43 00:09:05.121 ======================================================== 00:09:05.121 Total : 256.00 0.12 1004879.47 1000169.02 1043907.43 00:09:05.121 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719453 00:09:05.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (719453) - No such process 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 719453 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.402 rmmod nvme_tcp 00:09:05.402 rmmod nvme_fabrics 00:09:05.402 rmmod nvme_keyring 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 719022 ']' 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 719022 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 719022 ']' 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 719022 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 719022 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 719022' 00:09:05.402 killing process with pid 719022 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 719022 00:09:05.402 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 719022 
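A quick sanity check on the table above: the MiB/s column is just IOPS times I/O size. At 512-byte I/Os, 128 IOPS is 65,536 B/s, about 0.06 MiB/s, matching both per-core rows (and 163.74 IOPS in the earlier table gives ~0.08 MiB/s the same way). The one-liner below reproduces the arithmetic and is purely illustrative.

    # IOPS * io_size / 1 MiB -> MiB/s; 128 * 512 / 1048576 ~= 0.06
    awk 'BEGIN { iops = 128; io = 512; printf "%.2f MiB/s\n", iops * io / 1048576 }'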
00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.664 16:01:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.202 16:01:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.202 00:09:08.202 real 0m12.201s 00:09:08.202 user 0m27.505s 00:09:08.202 sys 0m2.952s 00:09:08.202 16:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.202 16:01:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.202 ************************************ 00:09:08.202 END TEST nvmf_delete_subsystem 00:09:08.202 ************************************ 00:09:08.202 16:01:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.202 16:01:53 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:08.202 16:01:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.202 16:01:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.202 16:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.202 ************************************ 00:09:08.202 START TEST nvmf_ns_masking 00:09:08.202 ************************************ 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:08.202 * Looking for test storage... 
00:09:08.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.202 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=35f0baf3-b1f4-4c3e-9e1b-56a3faa4d31f 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=099ad1e1-4f12-4750-a0d5-c8e6f00701bd 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d0941447-5d13-4b25-94c8-b76d28584ed0 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.203 16:01:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:10.109 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:10.109 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.109 
16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:10.109 Found net devices under 0000:09:00.0: cvl_0_0 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:10.109 Found net devices under 0000:09:00.1: cvl_0_1 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.109 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:09:10.110 00:09:10.110 --- 10.0.0.2 ping statistics --- 00:09:10.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.110 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:10.110 00:09:10.110 --- 10.0.0.1 ping statistics --- 00:09:10.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.110 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=721799 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 721799 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 721799 ']' 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.110 16:01:55 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.110 16:01:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 [2024-07-15 16:01:55.799872] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:10.110 [2024-07-15 16:01:55.799940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.110 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.110 [2024-07-15 16:01:55.861148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.110 [2024-07-15 16:01:55.964100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.110 [2024-07-15 16:01:55.964156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.110 [2024-07-15 16:01:55.964171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.110 [2024-07-15 16:01:55.964183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.110 [2024-07-15 16:01:55.964194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.110 [2024-07-15 16:01:55.964240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.110 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.368 [2024-07-15 16:01:56.330764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.368 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:10.368 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:10.368 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:10.625 Malloc1 00:09:10.625 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:11.191 Malloc2 00:09:11.191 16:01:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
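Annotation: before any masking is exercised, nvmftestinit splits the two e810 ports between a target network namespace and the default (initiator) namespace, which is why the target listens on 10.0.0.2 while the host connects from 10.0.0.1. The sequence below condenses the nvmf_tcp_init steps visible in the trace (interface and namespace names are taken from the log); it is a sketch of the flow, not the common.sh code itself.

    # Target side lives in its own netns; the initiator stays in the default namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator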
00:09:11.448 16:01:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:11.705 16:01:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.705 [2024-07-15 16:01:57.704767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d0941447-5d13-4b25-94c8-b76d28584ed0 -a 10.0.0.2 -s 4420 -i 4 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.965 16:01:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.503 16:01:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.503 [ 0]:0x1 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dec210f5481f421ca1d47f6968d56059 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dec210f5481f421ca1d47f6968d56059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
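The ns_is_visible checks in this test reduce to two host-side nvme-cli calls: confirm the namespace shows up in nvme list-ns, then read its NGUID and compare against all zeros (a namespace the host is not allowed to see identifies with a zeroed NGUID). Here is a minimal sketch of that helper, reconstructed from the commands in the xtrace; the real function also prints the "[ n ]:0xN" markers seen in the output, which is omitted here.

    # Succeeds if namespace $1 (e.g. 0x1) is listed and reports a non-zero NGUID.
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }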
00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.503 [ 0]:0x1 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dec210f5481f421ca1d47f6968d56059 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dec210f5481f421ca1d47f6968d56059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.503 [ 1]:0x2 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:14.503 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.761 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.019 16:02:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d0941447-5d13-4b25-94c8-b76d28584ed0 -a 10.0.0.2 -s 4420 -i 4 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:15.279 16:02:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.183 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.183 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.183 16:02:03 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:17.440 [ 0]:0x2 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.440 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:17.698 [ 0]:0x1 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dec210f5481f421ca1d47f6968d56059 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dec210f5481f421ca1d47f6968d56059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:17.698 [ 1]:0x2 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.698 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:18.263 16:02:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:18.263 [ 0]:0x2 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.263 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:18.521 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:18.521 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d0941447-5d13-4b25-94c8-b76d28584ed0 -a 10.0.0.2 -s 4420 -i 4 00:09:18.521 16:02:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:18.521 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.522 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.522 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:18.522 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:18.522 16:02:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
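The trace above is the core of the masking check: a namespace created with --no-auto-visible stays hidden from a connected host until it is explicitly mapped to that host's NQN, and the test simply treats an all-zero NGUID from nvme id-ns as "not visible". A condensed sketch of that sequence, reusing the rpc.py path, subsystem and host NQNs shown in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Re-add namespace 1 without auto-visibility, so no host sees it by default
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Expose NSID 1 to host1 only, then check from the initiator side
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0 | grep 0x1                    # listed once mapped to this host
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # non-zero NGUID when visible
# Hide it again; list-ns drops the NSID and the NGUID check reads back all zeros
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1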
00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:21.054 [ 0]:0x1 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dec210f5481f421ca1d47f6968d56059 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dec210f5481f421ca1d47f6968d56059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:21.054 [ 1]:0x2 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.054 16:02:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:21.054 16:02:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:21.313 [ 0]:0x2 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:21.313 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:21.571 [2024-07-15 16:02:07.377361] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:21.571 request: 00:09:21.571 { 00:09:21.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.571 "nsid": 2, 00:09:21.571 "host": "nqn.2016-06.io.spdk:host1", 00:09:21.571 "method": "nvmf_ns_remove_host", 00:09:21.571 "req_id": 1 00:09:21.571 } 00:09:21.571 Got JSON-RPC error response 00:09:21.571 response: 00:09:21.571 { 00:09:21.571 "code": -32602, 00:09:21.571 "message": "Invalid parameters" 00:09:21.571 } 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:21.571 [ 0]:0x2 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e318774af6742faa3502a7226689892 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
8e318774af6742faa3502a7226689892 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:21.571 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=723426 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 723426 /var/tmp/host.sock 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 723426 ']' 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:21.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.829 16:02:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.829 [2024-07-15 16:02:07.719533] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
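From this point the test drives a second SPDK application as the NVMe-oF host: spdk_tgt is started with its own RPC socket (-r /var/tmp/host.sock) on a separate core mask, and host-side commands are sent to it with rpc.py -s. A short sketch of that pattern, using the same socket path, addresses and NQNs that appear below:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Host-side app, reachable on its own RPC socket so it does not collide with the target's
$spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
# Attach one controller per host NQN; each attach produces an nvmeX bdev on the host app
$spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
# The resulting bdev names and UUIDs are then compared against the NGUIDs set on the target
$spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs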
00:09:21.829 [2024-07-15 16:02:07.719623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723426 ] 00:09:21.829 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.829 [2024-07-15 16:02:07.777524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.087 [2024-07-15 16:02:07.883168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.345 16:02:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.345 16:02:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:22.345 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.605 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.864 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 35f0baf3-b1f4-4c3e-9e1b-56a3faa4d31f 00:09:22.864 16:02:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:22.864 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 35F0BAF3B1F44C3E9E1B56A3FAA4D31F -i 00:09:23.123 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 099ad1e1-4f12-4750-a0d5-c8e6f00701bd 00:09:23.124 16:02:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:23.124 16:02:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 099AD1E14F124750A0D5C8E6F00701BD -i 00:09:23.124 16:02:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.382 16:02:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:23.641 16:02:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:23.641 16:02:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:24.209 nvme0n1 00:09:24.209 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:24.209 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:24.782 nvme1n2 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:24.782 16:02:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:25.047 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 35f0baf3-b1f4-4c3e-9e1b-56a3faa4d31f == \3\5\f\0\b\a\f\3\-\b\1\f\4\-\4\c\3\e\-\9\e\1\b\-\5\6\a\3\f\a\a\4\d\3\1\f ]] 00:09:25.047 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:25.047 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:25.047 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 099ad1e1-4f12-4750-a0d5-c8e6f00701bd == \0\9\9\a\d\1\e\1\-\4\f\1\2\-\4\7\5\0\-\a\0\d\5\-\c\8\e\6\f\0\0\7\0\1\b\d ]] 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 723426 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 723426 ']' 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 723426 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723426 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723426' 00:09:25.306 killing process with pid 723426 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 723426 00:09:25.306 16:02:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 723426 00:09:25.875 16:02:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:26.135 16:02:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.135 rmmod nvme_tcp 00:09:26.135 rmmod nvme_fabrics 00:09:26.135 rmmod nvme_keyring 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 721799 ']' 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 721799 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 721799 ']' 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 721799 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 721799 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 721799' 00:09:26.135 killing process with pid 721799 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 721799 00:09:26.135 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 721799 00:09:26.705 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.705 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.706 16:02:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.612 16:02:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.612 00:09:28.612 real 0m20.814s 00:09:28.612 user 0m27.124s 00:09:28.612 sys 0m3.985s 00:09:28.612 16:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.612 16:02:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:28.612 ************************************ 00:09:28.612 END TEST nvmf_ns_masking 00:09:28.612 ************************************ 00:09:28.612 16:02:14 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:28.612 16:02:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:28.612 16:02:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:28.612 16:02:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.612 16:02:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.612 16:02:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.612 ************************************ 00:09:28.612 START TEST nvmf_nvme_cli 00:09:28.612 ************************************ 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:28.612 * Looking for test storage... 00:09:28.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.612 16:02:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.613 16:02:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:31.145 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:31.145 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.145 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:31.146 Found net devices under 0000:09:00.0: cvl_0_0 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:31.146 Found net devices under 0000:09:00.1: cvl_0_1 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.146 16:02:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:09:31.146 00:09:31.146 --- 10.0.0.2 ping statistics --- 00:09:31.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.146 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:31.146 00:09:31.146 --- 10.0.0.1 ping statistics --- 00:09:31.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.146 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=725920 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 725920 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 725920 ']' 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.146 16:02:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.146 [2024-07-15 16:02:16.776311] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
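The nvmf_tcp_init block above places the target-side port (cvl_0_0) in its own network namespace so initiator and target can share one machine over physical NICs. Restated from the trace, with the interface names and addresses printed above:

# Target NIC isolated in a netns; the initiator NIC stays in the default namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting nvmf_tgt inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1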
00:09:31.146 [2024-07-15 16:02:16.776391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.146 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.146 [2024-07-15 16:02:16.841362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.146 [2024-07-15 16:02:16.952154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.146 [2024-07-15 16:02:16.952219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.146 [2024-07-15 16:02:16.952233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.146 [2024-07-15 16:02:16.952244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.146 [2024-07-15 16:02:16.952254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.146 [2024-07-15 16:02:16.952315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.146 [2024-07-15 16:02:16.952373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.146 [2024-07-15 16:02:16.952404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.146 [2024-07-15 16:02:16.952406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.146 [2024-07-15 16:02:17.116811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.146 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 Malloc0 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 Malloc1 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 [2024-07-15 16:02:17.201968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:31.405 00:09:31.405 Discovery Log Number of Records 2, Generation counter 2 00:09:31.405 =====Discovery Log Entry 0====== 00:09:31.405 trtype: tcp 00:09:31.405 adrfam: ipv4 00:09:31.405 subtype: current discovery subsystem 00:09:31.405 treq: not required 00:09:31.405 portid: 0 00:09:31.405 trsvcid: 4420 00:09:31.405 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:31.405 traddr: 10.0.0.2 00:09:31.405 eflags: explicit discovery connections, duplicate discovery information 00:09:31.405 sectype: none 00:09:31.405 =====Discovery Log Entry 1====== 00:09:31.405 trtype: tcp 00:09:31.405 adrfam: ipv4 00:09:31.405 subtype: nvme subsystem 00:09:31.405 treq: not required 00:09:31.405 portid: 0 00:09:31.405 trsvcid: 4420 00:09:31.405 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:31.405 traddr: 10.0.0.2 00:09:31.405 eflags: none 00:09:31.405 sectype: none 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:31.405 16:02:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:32.342 16:02:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:34.243 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:34.244 16:02:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:34.244 /dev/nvme0n1 ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- 
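Stripped of the xtrace wrappers, the nvme_cli flow exercised above is a short host-side sequence: create and export the subsystem, discover it, connect, wait for the namespaces to appear, then disconnect and tear the subsystem down. A condensed sketch using the values from this run follows (the test issues these through its rpc_cmd and waitforserial helpers; plain rpc.py invocations and a simple polling loop are shown here instead, and the Malloc0/Malloc1 bdevs are assumed to have been created earlier in the script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a)

# target side: subsystem with two malloc-backed namespaces, data and discovery listeners on TCP 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# host side: discovery reports two records (discovery subsystem plus cnode1), as printed above
nvme discover "${host[@]}" -t tcp -a 10.0.0.2 -s 4420
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# wait until both namespaces show up with the subsystem serial (the script caps this at ~15 retries)
while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 2 ]; do sleep 2; done

# teardown: disconnect the initiator, then delete the subsystem on the target
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1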
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.244 rmmod nvme_tcp 00:09:34.244 rmmod nvme_fabrics 00:09:34.244 rmmod nvme_keyring 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 725920 ']' 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 725920 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 725920 ']' 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 725920 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 725920 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 725920' 00:09:34.244 killing process with pid 725920 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 725920 00:09:34.244 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 725920 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.812 16:02:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.717 16:02:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.717 00:09:36.717 real 0m8.091s 00:09:36.717 user 0m14.700s 00:09:36.717 sys 0m2.165s 00:09:36.717 16:02:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.717 16:02:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.717 ************************************ 00:09:36.717 END TEST nvmf_nvme_cli 00:09:36.717 ************************************ 00:09:36.717 16:02:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.717 16:02:22 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:36.717 16:02:22 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:09:36.717 16:02:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.717 16:02:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.717 16:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.717 ************************************ 00:09:36.717 START TEST nvmf_vfio_user 00:09:36.717 ************************************ 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:36.717 * Looking for test storage... 00:09:36.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:36.717 
16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=726729 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 726729' 00:09:36.717 Process pid: 726729 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 726729 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 726729 ']' 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.717 16:02:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:36.976 [2024-07-15 16:02:22.758150] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:36.977 [2024-07-15 16:02:22.758250] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.977 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.977 [2024-07-15 16:02:22.817215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.977 [2024-07-15 16:02:22.923642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.977 [2024-07-15 16:02:22.923697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.977 [2024-07-15 16:02:22.923725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.977 [2024-07-15 16:02:22.923737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.977 [2024-07-15 16:02:22.923746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
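The vfio-user target bring-up that follows is the same pattern as the TCP case, except that the transport is VFIOUSER and each listener address is a local directory holding the emulated controller's socket rather than an IP/port pair. Condensed, the sequence the script issues for each of the two devices looks roughly like this (paths taken from this workspace; the script starts nvmf_tgt through its waitforlisten helper, shown here as a simple background launch):

bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# start the target on cores 0-3 with all tracepoint groups enabled
$bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &

# one VFIOUSER transport, then a malloc bdev + subsystem + directory listener per emulated controller
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# repeated with Malloc2 / nqn.2019-07.io.spdk:cnode2 / vfio-user2 for the second device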
00:09:36.977 [2024-07-15 16:02:22.923839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.977 [2024-07-15 16:02:22.923974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.977 [2024-07-15 16:02:22.924023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.977 [2024-07-15 16:02:22.924028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.236 16:02:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.236 16:02:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:37.236 16:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:38.166 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:38.423 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:38.423 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:38.423 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:38.423 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:38.423 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:38.678 Malloc1 00:09:38.679 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:38.935 16:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:39.190 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:39.447 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:39.447 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:39.447 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:39.704 Malloc2 00:09:39.704 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:39.962 16:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:40.220 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:40.479 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:40.479 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:40.479 16:02:26 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:40.479 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:40.479 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:40.479 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:40.479 [2024-07-15 16:02:26.376146] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:40.479 [2024-07-15 16:02:26.376190] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727144 ] 00:09:40.479 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.479 [2024-07-15 16:02:26.411245] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:40.479 [2024-07-15 16:02:26.413772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:40.479 [2024-07-15 16:02:26.413802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc715018000 00:09:40.479 [2024-07-15 16:02:26.414772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.415768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.416770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.417779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.418780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.419789] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.420794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.421793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.479 [2024-07-15 16:02:26.422800] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:40.479 [2024-07-15 16:02:26.422820] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc71500d000 00:09:40.479 [2024-07-15 16:02:26.423953] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:40.479 [2024-07-15 16:02:26.438587] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:40.479 [2024-07-15 16:02:26.438625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:40.479 [2024-07-15 16:02:26.443934] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:40.479 [2024-07-15 16:02:26.444005] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:40.479 [2024-07-15 16:02:26.444094] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:40.479 [2024-07-15 16:02:26.444122] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:40.479 [2024-07-15 16:02:26.444132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:40.479 [2024-07-15 16:02:26.444929] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:40.479 [2024-07-15 16:02:26.444949] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:40.479 [2024-07-15 16:02:26.444986] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:40.479 [2024-07-15 16:02:26.445931] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:40.479 [2024-07-15 16:02:26.445948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:40.479 [2024-07-15 16:02:26.445982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.446949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:40.479 [2024-07-15 16:02:26.446975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.447940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:40.479 [2024-07-15 16:02:26.447977] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:40.479 [2024-07-15 16:02:26.447993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.448010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.448120] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:40.479 [2024-07-15 16:02:26.448129] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.448137] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:40.479 [2024-07-15 16:02:26.448971] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:40.479 [2024-07-15 16:02:26.449966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:40.479 [2024-07-15 16:02:26.450974] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:40.479 [2024-07-15 16:02:26.451971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:40.479 [2024-07-15 16:02:26.452126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:40.479 [2024-07-15 16:02:26.452988] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:40.479 [2024-07-15 16:02:26.453023] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:40.480 [2024-07-15 16:02:26.453033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:40.480 [2024-07-15 16:02:26.453075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453100] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.480 [2024-07-15 16:02:26.453110] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.480 [2024-07-15 16:02:26.453129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453211] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:40.480 [2024-07-15 16:02:26.453225] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:40.480 [2024-07-15 16:02:26.453234] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:40.480 [2024-07-15 16:02:26.453257] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:40.480 [2024-07-15 16:02:26.453269] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:40.480 [2024-07-15 16:02:26.453277] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:40.480 [2024-07-15 16:02:26.453288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.480 [2024-07-15 16:02:26.453383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.480 [2024-07-15 16:02:26.453395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.480 [2024-07-15 16:02:26.453407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.480 [2024-07-15 16:02:26.453415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453466] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:40.480 [2024-07-15 16:02:26.453474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453579] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453606] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:40.480 [2024-07-15 16:02:26.453615] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:40.480 [2024-07-15 16:02:26.453624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453659] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:40.480 [2024-07-15 16:02:26.453677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.480 [2024-07-15 16:02:26.453712] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.480 [2024-07-15 16:02:26.453721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453791] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.480 [2024-07-15 16:02:26.453800] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.480 [2024-07-15 16:02:26.453809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.453834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
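The DEBUG stream above is the ordinary NVMe controller bring-up sequence (read VS and CAP, check CC.EN, enable the controller, wait for CSTS.RDY = 1, then Identify Controller, AER configuration, keep-alive, queue-count negotiation and namespace identification), carried over the vfio-user transport instead of PCIe. It comes from running the identify example with component log flags enabled, which in condensed form is:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci

The -r transport ID string is the addressing used by every host-side example in this run: trtype VFIOUSER with the listener directory as traddr.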
00:09:40.480 [2024-07-15 16:02:26.453858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453892] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:40.480 [2024-07-15 16:02:26.453899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:40.480 [2024-07-15 16:02:26.453908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:40.480 [2024-07-15 16:02:26.453947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.453977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454129] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:40.480 [2024-07-15 16:02:26.454140] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:40.480 [2024-07-15 16:02:26.454147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:40.480 [2024-07-15 16:02:26.454153] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:40.480 [2024-07-15 16:02:26.454163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:40.480 [2024-07-15 16:02:26.454176] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:40.480 
[2024-07-15 16:02:26.454184] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:40.480 [2024-07-15 16:02:26.454194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454206] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:40.480 [2024-07-15 16:02:26.454214] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.480 [2024-07-15 16:02:26.454223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454251] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:40.480 [2024-07-15 16:02:26.454260] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:40.480 [2024-07-15 16:02:26.454269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:40.480 [2024-07-15 16:02:26.454281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:40.480 [2024-07-15 16:02:26.454348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:40.480 ===================================================== 00:09:40.480 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:40.480 ===================================================== 00:09:40.480 Controller Capabilities/Features 00:09:40.480 ================================ 00:09:40.480 Vendor ID: 4e58 00:09:40.480 Subsystem Vendor ID: 4e58 00:09:40.480 Serial Number: SPDK1 00:09:40.480 Model Number: SPDK bdev Controller 00:09:40.480 Firmware Version: 24.09 00:09:40.480 Recommended Arb Burst: 6 00:09:40.480 IEEE OUI Identifier: 8d 6b 50 00:09:40.480 Multi-path I/O 00:09:40.480 May have multiple subsystem ports: Yes 00:09:40.480 May have multiple controllers: Yes 00:09:40.480 Associated with SR-IOV VF: No 00:09:40.480 Max Data Transfer Size: 131072 00:09:40.480 Max Number of Namespaces: 32 00:09:40.480 Max Number of I/O Queues: 127 00:09:40.480 NVMe Specification Version (VS): 1.3 00:09:40.480 NVMe Specification Version (Identify): 1.3 00:09:40.480 Maximum Queue Entries: 256 00:09:40.480 Contiguous Queues Required: Yes 00:09:40.480 Arbitration Mechanisms Supported 00:09:40.480 Weighted Round Robin: Not Supported 00:09:40.480 Vendor Specific: Not Supported 00:09:40.480 Reset Timeout: 15000 ms 00:09:40.480 Doorbell Stride: 4 bytes 00:09:40.480 NVM Subsystem Reset: Not Supported 00:09:40.480 Command Sets Supported 00:09:40.480 NVM Command Set: Supported 00:09:40.480 Boot Partition: Not Supported 00:09:40.480 Memory Page Size Minimum: 4096 bytes 00:09:40.480 Memory Page Size Maximum: 4096 bytes 00:09:40.480 Persistent Memory Region: Not Supported 
00:09:40.480 Optional Asynchronous Events Supported 00:09:40.480 Namespace Attribute Notices: Supported 00:09:40.480 Firmware Activation Notices: Not Supported 00:09:40.480 ANA Change Notices: Not Supported 00:09:40.480 PLE Aggregate Log Change Notices: Not Supported 00:09:40.480 LBA Status Info Alert Notices: Not Supported 00:09:40.480 EGE Aggregate Log Change Notices: Not Supported 00:09:40.480 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.480 Zone Descriptor Change Notices: Not Supported 00:09:40.481 Discovery Log Change Notices: Not Supported 00:09:40.481 Controller Attributes 00:09:40.481 128-bit Host Identifier: Supported 00:09:40.481 Non-Operational Permissive Mode: Not Supported 00:09:40.481 NVM Sets: Not Supported 00:09:40.481 Read Recovery Levels: Not Supported 00:09:40.481 Endurance Groups: Not Supported 00:09:40.481 Predictable Latency Mode: Not Supported 00:09:40.481 Traffic Based Keep ALive: Not Supported 00:09:40.481 Namespace Granularity: Not Supported 00:09:40.481 SQ Associations: Not Supported 00:09:40.481 UUID List: Not Supported 00:09:40.481 Multi-Domain Subsystem: Not Supported 00:09:40.481 Fixed Capacity Management: Not Supported 00:09:40.481 Variable Capacity Management: Not Supported 00:09:40.481 Delete Endurance Group: Not Supported 00:09:40.481 Delete NVM Set: Not Supported 00:09:40.481 Extended LBA Formats Supported: Not Supported 00:09:40.481 Flexible Data Placement Supported: Not Supported 00:09:40.481 00:09:40.481 Controller Memory Buffer Support 00:09:40.481 ================================ 00:09:40.481 Supported: No 00:09:40.481 00:09:40.481 Persistent Memory Region Support 00:09:40.481 ================================ 00:09:40.481 Supported: No 00:09:40.481 00:09:40.481 Admin Command Set Attributes 00:09:40.481 ============================ 00:09:40.481 Security Send/Receive: Not Supported 00:09:40.481 Format NVM: Not Supported 00:09:40.481 Firmware Activate/Download: Not Supported 00:09:40.481 Namespace Management: Not Supported 00:09:40.481 Device Self-Test: Not Supported 00:09:40.481 Directives: Not Supported 00:09:40.481 NVMe-MI: Not Supported 00:09:40.481 Virtualization Management: Not Supported 00:09:40.481 Doorbell Buffer Config: Not Supported 00:09:40.481 Get LBA Status Capability: Not Supported 00:09:40.481 Command & Feature Lockdown Capability: Not Supported 00:09:40.481 Abort Command Limit: 4 00:09:40.481 Async Event Request Limit: 4 00:09:40.481 Number of Firmware Slots: N/A 00:09:40.481 Firmware Slot 1 Read-Only: N/A 00:09:40.481 Firmware Activation Without Reset: N/A 00:09:40.481 Multiple Update Detection Support: N/A 00:09:40.481 Firmware Update Granularity: No Information Provided 00:09:40.481 Per-Namespace SMART Log: No 00:09:40.481 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.481 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:40.481 Command Effects Log Page: Supported 00:09:40.481 Get Log Page Extended Data: Supported 00:09:40.481 Telemetry Log Pages: Not Supported 00:09:40.481 Persistent Event Log Pages: Not Supported 00:09:40.481 Supported Log Pages Log Page: May Support 00:09:40.481 Commands Supported & Effects Log Page: Not Supported 00:09:40.481 Feature Identifiers & Effects Log Page:May Support 00:09:40.481 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.481 Data Area 4 for Telemetry Log: Not Supported 00:09:40.481 Error Log Page Entries Supported: 128 00:09:40.481 Keep Alive: Supported 00:09:40.481 Keep Alive Granularity: 10000 ms 00:09:40.481 00:09:40.481 NVM Command Set Attributes 
00:09:40.481 ========================== 00:09:40.481 Submission Queue Entry Size 00:09:40.481 Max: 64 00:09:40.481 Min: 64 00:09:40.481 Completion Queue Entry Size 00:09:40.481 Max: 16 00:09:40.481 Min: 16 00:09:40.481 Number of Namespaces: 32 00:09:40.481 Compare Command: Supported 00:09:40.481 Write Uncorrectable Command: Not Supported 00:09:40.481 Dataset Management Command: Supported 00:09:40.481 Write Zeroes Command: Supported 00:09:40.481 Set Features Save Field: Not Supported 00:09:40.481 Reservations: Not Supported 00:09:40.481 Timestamp: Not Supported 00:09:40.481 Copy: Supported 00:09:40.481 Volatile Write Cache: Present 00:09:40.481 Atomic Write Unit (Normal): 1 00:09:40.481 Atomic Write Unit (PFail): 1 00:09:40.481 Atomic Compare & Write Unit: 1 00:09:40.481 Fused Compare & Write: Supported 00:09:40.481 Scatter-Gather List 00:09:40.481 SGL Command Set: Supported (Dword aligned) 00:09:40.481 SGL Keyed: Not Supported 00:09:40.481 SGL Bit Bucket Descriptor: Not Supported 00:09:40.481 SGL Metadata Pointer: Not Supported 00:09:40.481 Oversized SGL: Not Supported 00:09:40.481 SGL Metadata Address: Not Supported 00:09:40.481 SGL Offset: Not Supported 00:09:40.481 Transport SGL Data Block: Not Supported 00:09:40.481 Replay Protected Memory Block: Not Supported 00:09:40.481 00:09:40.481 Firmware Slot Information 00:09:40.481 ========================= 00:09:40.481 Active slot: 1 00:09:40.481 Slot 1 Firmware Revision: 24.09 00:09:40.481 00:09:40.481 00:09:40.481 Commands Supported and Effects 00:09:40.481 ============================== 00:09:40.481 Admin Commands 00:09:40.481 -------------- 00:09:40.481 Get Log Page (02h): Supported 00:09:40.481 Identify (06h): Supported 00:09:40.481 Abort (08h): Supported 00:09:40.481 Set Features (09h): Supported 00:09:40.481 Get Features (0Ah): Supported 00:09:40.481 Asynchronous Event Request (0Ch): Supported 00:09:40.481 Keep Alive (18h): Supported 00:09:40.481 I/O Commands 00:09:40.481 ------------ 00:09:40.481 Flush (00h): Supported LBA-Change 00:09:40.481 Write (01h): Supported LBA-Change 00:09:40.481 Read (02h): Supported 00:09:40.481 Compare (05h): Supported 00:09:40.481 Write Zeroes (08h): Supported LBA-Change 00:09:40.481 Dataset Management (09h): Supported LBA-Change 00:09:40.481 Copy (19h): Supported LBA-Change 00:09:40.481 00:09:40.481 Error Log 00:09:40.481 ========= 00:09:40.481 00:09:40.481 Arbitration 00:09:40.481 =========== 00:09:40.481 Arbitration Burst: 1 00:09:40.481 00:09:40.481 Power Management 00:09:40.481 ================ 00:09:40.481 Number of Power States: 1 00:09:40.481 Current Power State: Power State #0 00:09:40.481 Power State #0: 00:09:40.481 Max Power: 0.00 W 00:09:40.481 Non-Operational State: Operational 00:09:40.481 Entry Latency: Not Reported 00:09:40.481 Exit Latency: Not Reported 00:09:40.481 Relative Read Throughput: 0 00:09:40.481 Relative Read Latency: 0 00:09:40.481 Relative Write Throughput: 0 00:09:40.481 Relative Write Latency: 0 00:09:40.481 Idle Power: Not Reported 00:09:40.481 Active Power: Not Reported 00:09:40.481 Non-Operational Permissive Mode: Not Supported 00:09:40.481 00:09:40.481 Health Information 00:09:40.481 ================== 00:09:40.481 Critical Warnings: 00:09:40.481 Available Spare Space: OK 00:09:40.481 Temperature: OK 00:09:40.481 Device Reliability: OK 00:09:40.481 Read Only: No 00:09:40.481 Volatile Memory Backup: OK 00:09:40.481 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:40.481 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:40.481 Available Spare: 0% 00:09:40.481 
Available Sp[2024-07-15 16:02:26.454464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:40.481 [2024-07-15 16:02:26.454481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:40.481 [2024-07-15 16:02:26.454523] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:40.481 [2024-07-15 16:02:26.454541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.481 [2024-07-15 16:02:26.454552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.481 [2024-07-15 16:02:26.454563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.481 [2024-07-15 16:02:26.454576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.481 [2024-07-15 16:02:26.458970] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:40.481 [2024-07-15 16:02:26.458994] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:40.481 [2024-07-15 16:02:26.460041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:40.481 [2024-07-15 16:02:26.460122] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:40.481 [2024-07-15 16:02:26.460137] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:40.481 [2024-07-15 16:02:26.461048] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:40.481 [2024-07-15 16:02:26.461072] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:40.481 [2024-07-15 16:02:26.461127] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:40.481 [2024-07-15 16:02:26.463094] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:40.740 are Threshold: 0% 00:09:40.740 Life Percentage Used: 0% 00:09:40.740 Data Units Read: 0 00:09:40.740 Data Units Written: 0 00:09:40.740 Host Read Commands: 0 00:09:40.740 Host Write Commands: 0 00:09:40.740 Controller Busy Time: 0 minutes 00:09:40.740 Power Cycles: 0 00:09:40.740 Power On Hours: 0 hours 00:09:40.740 Unsafe Shutdowns: 0 00:09:40.740 Unrecoverable Media Errors: 0 00:09:40.740 Lifetime Error Log Entries: 0 00:09:40.740 Warning Temperature Time: 0 minutes 00:09:40.740 Critical Temperature Time: 0 minutes 00:09:40.740 00:09:40.740 Number of Queues 00:09:40.740 ================ 00:09:40.740 Number of I/O Submission Queues: 127 00:09:40.740 Number of I/O Completion Queues: 127 00:09:40.740 00:09:40.740 Active Namespaces 00:09:40.740 ================= 00:09:40.740 Namespace ID:1 00:09:40.740 Error Recovery Timeout: Unlimited 00:09:40.740 Command 
Set Identifier: NVM (00h) 00:09:40.740 Deallocate: Supported 00:09:40.740 Deallocated/Unwritten Error: Not Supported 00:09:40.740 Deallocated Read Value: Unknown 00:09:40.740 Deallocate in Write Zeroes: Not Supported 00:09:40.740 Deallocated Guard Field: 0xFFFF 00:09:40.740 Flush: Supported 00:09:40.740 Reservation: Supported 00:09:40.740 Namespace Sharing Capabilities: Multiple Controllers 00:09:40.740 Size (in LBAs): 131072 (0GiB) 00:09:40.740 Capacity (in LBAs): 131072 (0GiB) 00:09:40.740 Utilization (in LBAs): 131072 (0GiB) 00:09:40.740 NGUID: 11514F81760A48BAB47388FC330E1566 00:09:40.740 UUID: 11514f81-760a-48ba-b473-88fc330e1566 00:09:40.740 Thin Provisioning: Not Supported 00:09:40.740 Per-NS Atomic Units: Yes 00:09:40.740 Atomic Boundary Size (Normal): 0 00:09:40.740 Atomic Boundary Size (PFail): 0 00:09:40.740 Atomic Boundary Offset: 0 00:09:40.740 Maximum Single Source Range Length: 65535 00:09:40.740 Maximum Copy Length: 65535 00:09:40.740 Maximum Source Range Count: 1 00:09:40.740 NGUID/EUI64 Never Reused: No 00:09:40.740 Namespace Write Protected: No 00:09:40.740 Number of LBA Formats: 1 00:09:40.740 Current LBA Format: LBA Format #00 00:09:40.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.740 00:09:40.740 16:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:40.740 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.740 [2024-07-15 16:02:26.704845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.071 Initializing NVMe Controllers 00:09:46.071 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.071 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:46.071 Initialization complete. Launching workers. 00:09:46.071 ======================================================== 00:09:46.071 Latency(us) 00:09:46.071 Device Information : IOPS MiB/s Average min max 00:09:46.071 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35060.49 136.96 3650.17 1165.26 8244.87 00:09:46.071 ======================================================== 00:09:46.071 Total : 35060.49 136.96 3650.17 1165.26 8244.87 00:09:46.071 00:09:46.071 [2024-07-15 16:02:31.729090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.071 16:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:46.071 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.071 [2024-07-15 16:02:31.973262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.346 Initializing NVMe Controllers 00:09:51.346 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:51.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:51.346 Initialization complete. Launching workers. 
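A minimal sketch of the two spdk_nvme_perf invocations logged above at nvmf_vfio_user.sh@84 and @85, using the build-tree path and flag values taken from this run (adjust the path for a local checkout); the comments gloss only the common options, and the remaining flags are copied verbatim from the logged command lines:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run; adjust for a local checkout
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -c core mask;
  # -s and -g are carried over unchanged from the logged commands.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # @84 (read)
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # @85 (write)
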
00:09:51.346 ======================================================== 00:09:51.346 Latency(us) 00:09:51.346 Device Information : IOPS MiB/s Average min max 00:09:51.346 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.48 62.67 7978.15 6982.54 8107.63 00:09:51.346 ======================================================== 00:09:51.346 Total : 16042.48 62.67 7978.15 6982.54 8107.63 00:09:51.346 00:09:51.346 [2024-07-15 16:02:37.009363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:51.346 16:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:51.346 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.346 [2024-07-15 16:02:37.223395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:56.625 [2024-07-15 16:02:42.287335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:56.625 Initializing NVMe Controllers 00:09:56.625 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:56.625 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:56.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:56.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:56.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:56.625 Initialization complete. Launching workers. 00:09:56.625 Starting thread on core 2 00:09:56.625 Starting thread on core 3 00:09:56.625 Starting thread on core 1 00:09:56.625 16:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:56.625 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.625 [2024-07-15 16:02:42.596486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:59.910 [2024-07-15 16:02:45.658364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:59.910 Initializing NVMe Controllers 00:09:59.910 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.910 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:59.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:59.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:59.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:59.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:59.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:59.910 Initialization complete. Launching workers. 
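The reconnect and arbitration examples above reuse the same VFIOUSER transport ID; a sketch of those two invocations (nvmf_vfio_user.sh@86 and @87) with the values shown in the log, again assuming the build-tree path from this run:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run; adjust for a local checkout
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # @86: mixed random read/write on cores 1-3 (mask 0xE), exercising reconnect handling.
  "$SPDK_DIR/build/examples/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # @87: arbitration example; it prints its effective configuration
  # (the "arbitration -q 64 -s 131072 ..." line above) before launching workers.
  "$SPDK_DIR/build/examples/arbitration" -t 3 -r "$TRID" -d 256 -g
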
00:09:59.910 Starting thread on core 1 with urgent priority queue 00:09:59.910 Starting thread on core 2 with urgent priority queue 00:09:59.910 Starting thread on core 3 with urgent priority queue 00:09:59.910 Starting thread on core 0 with urgent priority queue 00:09:59.910 SPDK bdev Controller (SPDK1 ) core 0: 5416.00 IO/s 18.46 secs/100000 ios 00:09:59.910 SPDK bdev Controller (SPDK1 ) core 1: 4802.00 IO/s 20.82 secs/100000 ios 00:09:59.910 SPDK bdev Controller (SPDK1 ) core 2: 5800.00 IO/s 17.24 secs/100000 ios 00:09:59.910 SPDK bdev Controller (SPDK1 ) core 3: 5944.00 IO/s 16.82 secs/100000 ios 00:09:59.910 ======================================================== 00:09:59.910 00:09:59.910 16:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:59.910 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.167 [2024-07-15 16:02:45.968528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:00.167 Initializing NVMe Controllers 00:10:00.167 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.167 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.167 Namespace ID: 1 size: 0GB 00:10:00.167 Initialization complete. 00:10:00.167 INFO: using host memory buffer for IO 00:10:00.167 Hello world! 00:10:00.167 [2024-07-15 16:02:46.003107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:00.167 16:02:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:00.167 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.427 [2024-07-15 16:02:46.302443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.384 Initializing NVMe Controllers 00:10:01.384 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.384 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.384 Initialization complete. Launching workers. 
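The hello_world smoke test and the overhead tool follow the same pattern; a sketch of the @88/@89 invocations as logged. The submit/complete histograms that follow are the overhead tool's output: each row gives a latency bucket in microseconds, the cumulative percentage of I/Os at or below that bucket, and the per-bucket count in parentheses.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run; adjust for a local checkout
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # @88: minimal I/O smoke test; prints "Hello world!" on success, as seen above.
  "$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"
  # @89: per-I/O software overhead measurement; emits the latency histograms shown below.
  "$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 -r "$TRID"
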
00:10:01.384 submit (in ns) avg, min, max = 7213.2, 3491.1, 4029157.8 00:10:01.384 complete (in ns) avg, min, max = 27901.7, 2061.1, 4015073.3 00:10:01.384 00:10:01.384 Submit histogram 00:10:01.384 ================ 00:10:01.384 Range in us Cumulative Count 00:10:01.384 3.484 - 3.508: 0.0664% ( 9) 00:10:01.384 3.508 - 3.532: 0.5093% ( 60) 00:10:01.384 3.532 - 3.556: 1.7198% ( 164) 00:10:01.384 3.556 - 3.579: 4.7535% ( 411) 00:10:01.384 3.579 - 3.603: 9.6915% ( 669) 00:10:01.384 3.603 - 3.627: 16.6224% ( 939) 00:10:01.384 3.627 - 3.650: 24.6383% ( 1086) 00:10:01.384 3.650 - 3.674: 32.6247% ( 1082) 00:10:01.384 3.674 - 3.698: 39.3490% ( 911) 00:10:01.384 3.698 - 3.721: 47.6306% ( 1122) 00:10:01.384 3.721 - 3.745: 52.7901% ( 699) 00:10:01.384 3.745 - 3.769: 57.6838% ( 663) 00:10:01.384 3.769 - 3.793: 61.2267% ( 480) 00:10:01.384 3.793 - 3.816: 64.7328% ( 475) 00:10:01.384 3.816 - 3.840: 68.7851% ( 549) 00:10:01.384 3.840 - 3.864: 72.7487% ( 537) 00:10:01.384 3.864 - 3.887: 76.9708% ( 572) 00:10:01.384 3.887 - 3.911: 80.0930% ( 423) 00:10:01.384 3.911 - 3.935: 83.6212% ( 478) 00:10:01.384 3.935 - 3.959: 86.2563% ( 357) 00:10:01.384 3.959 - 3.982: 88.2418% ( 269) 00:10:01.384 3.982 - 4.006: 89.8214% ( 214) 00:10:01.384 4.006 - 4.030: 90.9581% ( 154) 00:10:01.384 4.030 - 4.053: 92.1169% ( 157) 00:10:01.384 4.053 - 4.077: 93.1355% ( 138) 00:10:01.384 4.077 - 4.101: 93.9770% ( 114) 00:10:01.384 4.101 - 4.124: 94.6560% ( 92) 00:10:01.384 4.124 - 4.148: 95.2908% ( 86) 00:10:01.384 4.148 - 4.172: 95.7484% ( 62) 00:10:01.384 4.172 - 4.196: 96.0289% ( 38) 00:10:01.384 4.196 - 4.219: 96.3020% ( 37) 00:10:01.384 4.219 - 4.243: 96.5530% ( 34) 00:10:01.384 4.243 - 4.267: 96.7080% ( 21) 00:10:01.384 4.267 - 4.290: 96.8778% ( 23) 00:10:01.384 4.290 - 4.314: 96.9221% ( 6) 00:10:01.384 4.314 - 4.338: 97.0106% ( 12) 00:10:01.384 4.338 - 4.361: 97.0771% ( 9) 00:10:01.384 4.361 - 4.385: 97.1066% ( 4) 00:10:01.384 4.385 - 4.409: 97.1435% ( 5) 00:10:01.384 4.409 - 4.433: 97.1952% ( 7) 00:10:01.384 4.433 - 4.456: 97.2321% ( 5) 00:10:01.384 4.456 - 4.480: 97.2468% ( 2) 00:10:01.384 4.480 - 4.504: 97.2690% ( 3) 00:10:01.384 4.504 - 4.527: 97.2911% ( 3) 00:10:01.384 4.527 - 4.551: 97.2985% ( 1) 00:10:01.384 4.551 - 4.575: 97.3133% ( 2) 00:10:01.384 4.622 - 4.646: 97.3206% ( 1) 00:10:01.384 4.670 - 4.693: 97.3354% ( 2) 00:10:01.384 4.693 - 4.717: 97.3575% ( 3) 00:10:01.384 4.717 - 4.741: 97.3723% ( 2) 00:10:01.384 4.741 - 4.764: 97.4018% ( 4) 00:10:01.384 4.764 - 4.788: 97.4830% ( 11) 00:10:01.384 4.788 - 4.812: 97.5125% ( 4) 00:10:01.384 4.812 - 4.836: 97.5716% ( 8) 00:10:01.384 4.836 - 4.859: 97.6011% ( 4) 00:10:01.384 4.859 - 4.883: 97.6528% ( 7) 00:10:01.384 4.883 - 4.907: 97.6897% ( 5) 00:10:01.384 4.907 - 4.930: 97.7487% ( 8) 00:10:01.384 4.930 - 4.954: 97.7857% ( 5) 00:10:01.384 4.954 - 4.978: 97.8152% ( 4) 00:10:01.384 4.978 - 5.001: 97.8668% ( 7) 00:10:01.384 5.001 - 5.025: 97.8964% ( 4) 00:10:01.384 5.025 - 5.049: 97.9111% ( 2) 00:10:01.384 5.049 - 5.073: 97.9333% ( 3) 00:10:01.384 5.073 - 5.096: 97.9628% ( 4) 00:10:01.384 5.096 - 5.120: 97.9849% ( 3) 00:10:01.384 5.120 - 5.144: 97.9997% ( 2) 00:10:01.384 5.144 - 5.167: 98.0145% ( 2) 00:10:01.384 5.167 - 5.191: 98.0366% ( 3) 00:10:01.384 5.191 - 5.215: 98.0514% ( 2) 00:10:01.384 5.215 - 5.239: 98.0735% ( 3) 00:10:01.384 5.239 - 5.262: 98.0809% ( 1) 00:10:01.384 5.262 - 5.286: 98.0957% ( 2) 00:10:01.384 5.286 - 5.310: 98.1030% ( 1) 00:10:01.384 5.333 - 5.357: 98.1252% ( 3) 00:10:01.384 5.357 - 5.381: 98.1326% ( 1) 00:10:01.384 5.381 - 5.404: 98.1399% ( 1) 
00:10:01.384 5.452 - 5.476: 98.1473% ( 1) 00:10:01.384 5.476 - 5.499: 98.1621% ( 2) 00:10:01.384 5.547 - 5.570: 98.1695% ( 1) 00:10:01.384 5.618 - 5.641: 98.1769% ( 1) 00:10:01.384 5.665 - 5.689: 98.1842% ( 1) 00:10:01.384 5.807 - 5.831: 98.1916% ( 1) 00:10:01.384 5.879 - 5.902: 98.1990% ( 1) 00:10:01.384 5.997 - 6.021: 98.2064% ( 1) 00:10:01.384 6.116 - 6.163: 98.2211% ( 2) 00:10:01.384 6.163 - 6.210: 98.2359% ( 2) 00:10:01.384 6.210 - 6.258: 98.2433% ( 1) 00:10:01.384 6.353 - 6.400: 98.2580% ( 2) 00:10:01.384 6.447 - 6.495: 98.2654% ( 1) 00:10:01.384 6.732 - 6.779: 98.2728% ( 1) 00:10:01.384 6.921 - 6.969: 98.2802% ( 1) 00:10:01.384 7.159 - 7.206: 98.2876% ( 1) 00:10:01.384 7.206 - 7.253: 98.2950% ( 1) 00:10:01.384 7.348 - 7.396: 98.3097% ( 2) 00:10:01.384 7.396 - 7.443: 98.3245% ( 2) 00:10:01.384 7.443 - 7.490: 98.3319% ( 1) 00:10:01.384 7.538 - 7.585: 98.3392% ( 1) 00:10:01.384 7.585 - 7.633: 98.3466% ( 1) 00:10:01.384 7.633 - 7.680: 98.3540% ( 1) 00:10:01.384 7.680 - 7.727: 98.3688% ( 2) 00:10:01.384 7.775 - 7.822: 98.3835% ( 2) 00:10:01.384 7.917 - 7.964: 98.3909% ( 1) 00:10:01.384 7.964 - 8.012: 98.4057% ( 2) 00:10:01.384 8.012 - 8.059: 98.4130% ( 1) 00:10:01.384 8.059 - 8.107: 98.4204% ( 1) 00:10:01.384 8.107 - 8.154: 98.4278% ( 1) 00:10:01.384 8.154 - 8.201: 98.4426% ( 2) 00:10:01.384 8.201 - 8.249: 98.4500% ( 1) 00:10:01.384 8.249 - 8.296: 98.4573% ( 1) 00:10:01.384 8.296 - 8.344: 98.4721% ( 2) 00:10:01.384 8.391 - 8.439: 98.4869% ( 2) 00:10:01.384 8.439 - 8.486: 98.5164% ( 4) 00:10:01.384 8.533 - 8.581: 98.5238% ( 1) 00:10:01.384 8.581 - 8.628: 98.5311% ( 1) 00:10:01.384 8.770 - 8.818: 98.5385% ( 1) 00:10:01.384 8.818 - 8.865: 98.5754% ( 5) 00:10:01.384 9.007 - 9.055: 98.5828% ( 1) 00:10:01.384 9.102 - 9.150: 98.5976% ( 2) 00:10:01.384 9.292 - 9.339: 98.6050% ( 1) 00:10:01.384 9.387 - 9.434: 98.6197% ( 2) 00:10:01.384 9.481 - 9.529: 98.6345% ( 2) 00:10:01.384 9.529 - 9.576: 98.6419% ( 1) 00:10:01.384 9.576 - 9.624: 98.6492% ( 1) 00:10:01.385 9.671 - 9.719: 98.6566% ( 1) 00:10:01.385 9.861 - 9.908: 98.6935% ( 5) 00:10:01.385 10.003 - 10.050: 98.7083% ( 2) 00:10:01.385 10.050 - 10.098: 98.7157% ( 1) 00:10:01.385 10.145 - 10.193: 98.7231% ( 1) 00:10:01.385 10.382 - 10.430: 98.7304% ( 1) 00:10:01.385 10.430 - 10.477: 98.7378% ( 1) 00:10:01.385 10.477 - 10.524: 98.7452% ( 1) 00:10:01.385 10.524 - 10.572: 98.7526% ( 1) 00:10:01.385 10.714 - 10.761: 98.7600% ( 1) 00:10:01.385 10.761 - 10.809: 98.7673% ( 1) 00:10:01.385 10.809 - 10.856: 98.7747% ( 1) 00:10:01.385 10.904 - 10.951: 98.7895% ( 2) 00:10:01.385 10.951 - 10.999: 98.7969% ( 1) 00:10:01.385 11.283 - 11.330: 98.8116% ( 2) 00:10:01.385 11.330 - 11.378: 98.8264% ( 2) 00:10:01.385 11.520 - 11.567: 98.8338% ( 1) 00:10:01.385 11.567 - 11.615: 98.8412% ( 1) 00:10:01.385 11.615 - 11.662: 98.8485% ( 1) 00:10:01.385 11.662 - 11.710: 98.8559% ( 1) 00:10:01.385 11.757 - 11.804: 98.8633% ( 1) 00:10:01.385 11.804 - 11.852: 98.8707% ( 1) 00:10:01.385 12.089 - 12.136: 98.8781% ( 1) 00:10:01.385 12.136 - 12.231: 98.8854% ( 1) 00:10:01.385 12.326 - 12.421: 98.8928% ( 1) 00:10:01.385 12.516 - 12.610: 98.9002% ( 1) 00:10:01.385 12.610 - 12.705: 98.9076% ( 1) 00:10:01.385 13.084 - 13.179: 98.9150% ( 1) 00:10:01.385 13.274 - 13.369: 98.9224% ( 1) 00:10:01.385 13.464 - 13.559: 98.9297% ( 1) 00:10:01.385 13.559 - 13.653: 98.9371% ( 1) 00:10:01.385 13.653 - 13.748: 98.9445% ( 1) 00:10:01.385 13.748 - 13.843: 98.9519% ( 1) 00:10:01.385 13.938 - 14.033: 98.9593% ( 1) 00:10:01.385 14.033 - 14.127: 98.9740% ( 2) 00:10:01.385 14.127 - 14.222: 98.9888% ( 2) 
00:10:01.385 14.222 - 14.317: 98.9962% ( 1) 00:10:01.385 14.412 - 14.507: 99.0035% ( 1) 00:10:01.385 14.886 - 14.981: 99.0183% ( 2) 00:10:01.385 14.981 - 15.076: 99.0257% ( 1) 00:10:01.385 15.076 - 15.170: 99.0331% ( 1) 00:10:01.385 16.972 - 17.067: 99.0478% ( 2) 00:10:01.385 17.256 - 17.351: 99.0626% ( 2) 00:10:01.385 17.351 - 17.446: 99.0847% ( 3) 00:10:01.385 17.636 - 17.730: 99.1069% ( 3) 00:10:01.385 17.730 - 17.825: 99.1143% ( 1) 00:10:01.385 17.825 - 17.920: 99.1733% ( 8) 00:10:01.385 17.920 - 18.015: 99.2102% ( 5) 00:10:01.385 18.015 - 18.110: 99.2988% ( 12) 00:10:01.385 18.110 - 18.204: 99.3505% ( 7) 00:10:01.385 18.204 - 18.299: 99.3947% ( 6) 00:10:01.385 18.299 - 18.394: 99.4759% ( 11) 00:10:01.385 18.394 - 18.489: 99.5571% ( 11) 00:10:01.385 18.489 - 18.584: 99.6014% ( 6) 00:10:01.385 18.584 - 18.679: 99.6383% ( 5) 00:10:01.385 18.679 - 18.773: 99.7048% ( 9) 00:10:01.385 18.773 - 18.868: 99.7343% ( 4) 00:10:01.385 18.868 - 18.963: 99.7712% ( 5) 00:10:01.385 18.963 - 19.058: 99.7786% ( 1) 00:10:01.385 19.058 - 19.153: 99.7933% ( 2) 00:10:01.385 19.153 - 19.247: 99.8007% ( 1) 00:10:01.385 19.247 - 19.342: 99.8155% ( 2) 00:10:01.385 19.342 - 19.437: 99.8229% ( 1) 00:10:01.385 19.437 - 19.532: 99.8302% ( 1) 00:10:01.385 19.721 - 19.816: 99.8450% ( 2) 00:10:01.385 19.911 - 20.006: 99.8524% ( 1) 00:10:01.385 20.670 - 20.764: 99.8598% ( 1) 00:10:01.385 23.609 - 23.704: 99.8671% ( 1) 00:10:01.385 24.273 - 24.462: 99.8745% ( 1) 00:10:01.385 26.169 - 26.359: 99.8819% ( 1) 00:10:01.385 27.496 - 27.686: 99.8893% ( 1) 00:10:01.385 27.686 - 27.876: 99.8967% ( 1) 00:10:01.385 28.255 - 28.444: 99.9040% ( 1) 00:10:01.385 32.616 - 32.806: 99.9114% ( 1) 00:10:01.385 33.754 - 33.944: 99.9188% ( 1) 00:10:01.385 3980.705 - 4004.978: 99.9779% ( 8) 00:10:01.385 4004.978 - 4029.250: 100.0000% ( 3) 00:10:01.385 00:10:01.385 Complete histogram 00:10:01.385 ================== 00:10:01.385 Range in us Cumulative Count 00:10:01.385 2.050 - 2.062: 0.0590% ( 8) 00:10:01.385 2.062 - 2.074: 25.0295% ( 3383) 00:10:01.385 2.074 - 2.086: 40.3897% ( 2081) 00:10:01.385 2.086 - 2.098: 42.6041% ( 300) 00:10:01.385 2.098 - 2.110: 57.2557% ( 1985) 00:10:01.385 2.110 - 2.121: 61.2784% ( 545) 00:10:01.385 2.121 - 2.133: 63.4854% ( 299) 00:10:01.385 2.133 - 2.145: 73.3983% ( 1343) 00:10:01.385 2.145 - 2.157: 76.0186% ( 355) 00:10:01.385 2.157 - 2.169: 77.7163% ( 230) 00:10:01.385 2.169 - 2.181: 81.3183% ( 488) 00:10:01.385 2.181 - 2.193: 82.3369% ( 138) 00:10:01.385 2.193 - 2.204: 83.3186% ( 133) 00:10:01.385 2.204 - 2.216: 87.2970% ( 539) 00:10:01.385 2.216 - 2.228: 89.4006% ( 285) 00:10:01.385 2.228 - 2.240: 91.2164% ( 246) 00:10:01.385 2.240 - 2.252: 93.4308% ( 300) 00:10:01.385 2.252 - 2.264: 94.0803% ( 88) 00:10:01.385 2.264 - 2.276: 94.3239% ( 33) 00:10:01.385 2.276 - 2.287: 94.6929% ( 50) 00:10:01.385 2.287 - 2.299: 95.1506% ( 62) 00:10:01.385 2.299 - 2.311: 95.5861% ( 59) 00:10:01.385 2.311 - 2.323: 95.8075% ( 30) 00:10:01.385 2.323 - 2.335: 95.8961% ( 12) 00:10:01.385 2.335 - 2.347: 95.9551% ( 8) 00:10:01.385 2.347 - 2.359: 96.0658% ( 15) 00:10:01.385 2.359 - 2.370: 96.2873% ( 30) 00:10:01.385 2.370 - 2.382: 96.7006% ( 56) 00:10:01.385 2.382 - 2.394: 97.0106% ( 42) 00:10:01.385 2.394 - 2.406: 97.2173% ( 28) 00:10:01.385 2.406 - 2.418: 97.3575% ( 19) 00:10:01.385 2.418 - 2.430: 97.5199% ( 22) 00:10:01.385 2.430 - 2.441: 97.7118% ( 26) 00:10:01.385 2.441 - 2.453: 97.8078% ( 13) 00:10:01.385 2.453 - 2.465: 97.9628% ( 21) 00:10:01.385 2.465 - 2.477: 98.0661% ( 14) 00:10:01.385 2.477 - 2.489: 98.1547% ( 12) 
00:10:01.385 2.489 - 2.501: 98.1990% ( 6) 00:10:01.385 2.501 - 2.513: 98.2654% ( 9) 00:10:01.385 2.513 - 2.524: 98.3023% ( 5) 00:10:01.385 2.524 - 2.536: 98.3466% ( 6) 00:10:01.385 2.536 - 2.548: 98.3688% ( 3) 00:10:01.385 2.548 - 2.560: 98.3909% ( 3) 00:10:01.385 2.560 - 2.572: 98.4130% ( 3) 00:10:01.385 2.572 - 2.584: 98.4278% ( 2) 00:10:01.385 2.584 - 2.596: 98.4352% ( 1) 00:10:01.385 2.596 - 2.607: 98.4426% ( 1) 00:10:01.385 2.607 - 2.619: 98.4573% ( 2) 00:10:01.385 2.631 - 2.643: 98.4721% ( 2) 00:10:01.385 2.643 - 2.655: 98.4795% ( 1) 00:10:01.385 2.667 - 2.679: 9[2024-07-15 16:02:47.321685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.385 8.4869% ( 1) 00:10:01.385 2.679 - 2.690: 98.4942% ( 1) 00:10:01.385 2.702 - 2.714: 98.5090% ( 2) 00:10:01.385 2.785 - 2.797: 98.5238% ( 2) 00:10:01.385 3.153 - 3.176: 98.5311% ( 1) 00:10:01.385 3.342 - 3.366: 98.5385% ( 1) 00:10:01.385 3.390 - 3.413: 98.5459% ( 1) 00:10:01.385 3.437 - 3.461: 98.5607% ( 2) 00:10:01.385 3.484 - 3.508: 98.5754% ( 2) 00:10:01.385 3.532 - 3.556: 98.5902% ( 2) 00:10:01.385 3.603 - 3.627: 98.6050% ( 2) 00:10:01.385 3.627 - 3.650: 98.6197% ( 2) 00:10:01.385 3.674 - 3.698: 98.6271% ( 1) 00:10:01.385 3.698 - 3.721: 98.6345% ( 1) 00:10:01.385 3.721 - 3.745: 98.6492% ( 2) 00:10:01.385 3.793 - 3.816: 98.6566% ( 1) 00:10:01.385 3.911 - 3.935: 98.6714% ( 2) 00:10:01.385 3.935 - 3.959: 98.6788% ( 1) 00:10:01.385 4.006 - 4.030: 98.6862% ( 1) 00:10:01.385 5.381 - 5.404: 98.6935% ( 1) 00:10:01.385 5.404 - 5.428: 98.7009% ( 1) 00:10:01.385 5.523 - 5.547: 98.7083% ( 1) 00:10:01.385 5.689 - 5.713: 98.7157% ( 1) 00:10:01.385 6.163 - 6.210: 98.7231% ( 1) 00:10:01.385 6.305 - 6.353: 98.7304% ( 1) 00:10:01.385 6.447 - 6.495: 98.7378% ( 1) 00:10:01.385 6.542 - 6.590: 98.7452% ( 1) 00:10:01.385 6.779 - 6.827: 98.7526% ( 1) 00:10:01.385 6.827 - 6.874: 98.7673% ( 2) 00:10:01.385 6.969 - 7.016: 98.7747% ( 1) 00:10:01.385 7.159 - 7.206: 98.7821% ( 1) 00:10:01.385 7.490 - 7.538: 98.7895% ( 1) 00:10:01.385 7.585 - 7.633: 98.7969% ( 1) 00:10:01.385 7.680 - 7.727: 98.8043% ( 1) 00:10:01.385 8.581 - 8.628: 98.8116% ( 1) 00:10:01.385 15.644 - 15.739: 98.8190% ( 1) 00:10:01.385 15.929 - 16.024: 98.8559% ( 5) 00:10:01.385 16.024 - 16.119: 98.8854% ( 4) 00:10:01.385 16.119 - 16.213: 98.9224% ( 5) 00:10:01.385 16.213 - 16.308: 98.9666% ( 6) 00:10:01.385 16.308 - 16.403: 99.0035% ( 5) 00:10:01.385 16.403 - 16.498: 99.0331% ( 4) 00:10:01.385 16.498 - 16.593: 99.0774% ( 6) 00:10:01.385 16.593 - 16.687: 99.1216% ( 6) 00:10:01.385 16.687 - 16.782: 99.1955% ( 10) 00:10:01.385 16.782 - 16.877: 99.2176% ( 3) 00:10:01.385 16.877 - 16.972: 99.2471% ( 4) 00:10:01.385 16.972 - 17.067: 99.2766% ( 4) 00:10:01.385 17.067 - 17.161: 99.2840% ( 1) 00:10:01.385 17.161 - 17.256: 99.2914% ( 1) 00:10:01.385 17.256 - 17.351: 99.2988% ( 1) 00:10:01.385 17.351 - 17.446: 99.3062% ( 1) 00:10:01.385 17.446 - 17.541: 99.3136% ( 1) 00:10:01.385 17.636 - 17.730: 99.3283% ( 2) 00:10:01.385 17.730 - 17.825: 99.3357% ( 1) 00:10:01.385 18.204 - 18.299: 99.3431% ( 1) 00:10:01.385 20.006 - 20.101: 99.3505% ( 1) 00:10:01.385 85.713 - 86.092: 99.3578% ( 1) 00:10:01.386 3980.705 - 4004.978: 99.9262% ( 77) 00:10:01.386 4004.978 - 4029.250: 100.0000% ( 10) 00:10:01.386 00:10:01.386 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:01.386 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:01.386 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:01.386 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:01.386 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:01.950 [ 00:10:01.950 { 00:10:01.950 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:01.950 "subtype": "Discovery", 00:10:01.950 "listen_addresses": [], 00:10:01.950 "allow_any_host": true, 00:10:01.950 "hosts": [] 00:10:01.950 }, 00:10:01.950 { 00:10:01.950 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:01.950 "subtype": "NVMe", 00:10:01.950 "listen_addresses": [ 00:10:01.950 { 00:10:01.950 "trtype": "VFIOUSER", 00:10:01.950 "adrfam": "IPv4", 00:10:01.950 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:01.950 "trsvcid": "0" 00:10:01.950 } 00:10:01.950 ], 00:10:01.951 "allow_any_host": true, 00:10:01.951 "hosts": [], 00:10:01.951 "serial_number": "SPDK1", 00:10:01.951 "model_number": "SPDK bdev Controller", 00:10:01.951 "max_namespaces": 32, 00:10:01.951 "min_cntlid": 1, 00:10:01.951 "max_cntlid": 65519, 00:10:01.951 "namespaces": [ 00:10:01.951 { 00:10:01.951 "nsid": 1, 00:10:01.951 "bdev_name": "Malloc1", 00:10:01.951 "name": "Malloc1", 00:10:01.951 "nguid": "11514F81760A48BAB47388FC330E1566", 00:10:01.951 "uuid": "11514f81-760a-48ba-b473-88fc330e1566" 00:10:01.951 } 00:10:01.951 ] 00:10:01.951 }, 00:10:01.951 { 00:10:01.951 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:01.951 "subtype": "NVMe", 00:10:01.951 "listen_addresses": [ 00:10:01.951 { 00:10:01.951 "trtype": "VFIOUSER", 00:10:01.951 "adrfam": "IPv4", 00:10:01.951 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:01.951 "trsvcid": "0" 00:10:01.951 } 00:10:01.951 ], 00:10:01.951 "allow_any_host": true, 00:10:01.951 "hosts": [], 00:10:01.951 "serial_number": "SPDK2", 00:10:01.951 "model_number": "SPDK bdev Controller", 00:10:01.951 "max_namespaces": 32, 00:10:01.951 "min_cntlid": 1, 00:10:01.951 "max_cntlid": 65519, 00:10:01.951 "namespaces": [ 00:10:01.951 { 00:10:01.951 "nsid": 1, 00:10:01.951 "bdev_name": "Malloc2", 00:10:01.951 "name": "Malloc2", 00:10:01.951 "nguid": "7EBEE552E0DE47BE887F09194B87CE8B", 00:10:01.951 "uuid": "7ebee552-e0de-47be-887f-09194b87ce8b" 00:10:01.951 } 00:10:01.951 ] 00:10:01.951 } 00:10:01.951 ] 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=729664 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:01.951 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.951 [2024-07-15 16:02:47.827442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.951 Malloc3 00:10:01.951 16:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:02.208 [2024-07-15 16:02:48.189092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:02.208 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:02.466 Asynchronous Event Request test 00:10:02.466 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:02.466 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:02.466 Registering asynchronous event callbacks... 00:10:02.466 Starting namespace attribute notice tests for all controllers... 00:10:02.466 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:02.466 aer_cb - Changed Namespace 00:10:02.466 Cleaning up... 00:10:02.466 [ 00:10:02.466 { 00:10:02.466 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:02.466 "subtype": "Discovery", 00:10:02.466 "listen_addresses": [], 00:10:02.466 "allow_any_host": true, 00:10:02.466 "hosts": [] 00:10:02.466 }, 00:10:02.466 { 00:10:02.466 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:02.466 "subtype": "NVMe", 00:10:02.466 "listen_addresses": [ 00:10:02.466 { 00:10:02.466 "trtype": "VFIOUSER", 00:10:02.466 "adrfam": "IPv4", 00:10:02.466 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:02.466 "trsvcid": "0" 00:10:02.466 } 00:10:02.466 ], 00:10:02.466 "allow_any_host": true, 00:10:02.466 "hosts": [], 00:10:02.466 "serial_number": "SPDK1", 00:10:02.466 "model_number": "SPDK bdev Controller", 00:10:02.466 "max_namespaces": 32, 00:10:02.466 "min_cntlid": 1, 00:10:02.466 "max_cntlid": 65519, 00:10:02.466 "namespaces": [ 00:10:02.466 { 00:10:02.466 "nsid": 1, 00:10:02.466 "bdev_name": "Malloc1", 00:10:02.466 "name": "Malloc1", 00:10:02.466 "nguid": "11514F81760A48BAB47388FC330E1566", 00:10:02.466 "uuid": "11514f81-760a-48ba-b473-88fc330e1566" 00:10:02.466 }, 00:10:02.466 { 00:10:02.466 "nsid": 2, 00:10:02.466 "bdev_name": "Malloc3", 00:10:02.466 "name": "Malloc3", 00:10:02.466 "nguid": "D7B351CBF59546E8A5029EC731D343C1", 00:10:02.466 "uuid": "d7b351cb-f595-46e8-a502-9ec731d343c1" 00:10:02.466 } 00:10:02.466 ] 00:10:02.466 }, 00:10:02.466 { 00:10:02.466 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:02.466 "subtype": "NVMe", 00:10:02.466 "listen_addresses": [ 00:10:02.466 { 00:10:02.466 "trtype": "VFIOUSER", 00:10:02.466 "adrfam": "IPv4", 00:10:02.466 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:02.466 "trsvcid": "0" 00:10:02.466 } 00:10:02.466 ], 00:10:02.466 "allow_any_host": true, 00:10:02.466 "hosts": [], 00:10:02.466 "serial_number": "SPDK2", 00:10:02.466 "model_number": "SPDK bdev Controller", 00:10:02.466 
"max_namespaces": 32, 00:10:02.466 "min_cntlid": 1, 00:10:02.466 "max_cntlid": 65519, 00:10:02.466 "namespaces": [ 00:10:02.466 { 00:10:02.466 "nsid": 1, 00:10:02.466 "bdev_name": "Malloc2", 00:10:02.466 "name": "Malloc2", 00:10:02.466 "nguid": "7EBEE552E0DE47BE887F09194B87CE8B", 00:10:02.466 "uuid": "7ebee552-e0de-47be-887f-09194b87ce8b" 00:10:02.466 } 00:10:02.466 ] 00:10:02.466 } 00:10:02.466 ] 00:10:02.466 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 729664 00:10:02.466 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:02.466 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:02.466 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:02.466 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:02.466 [2024-07-15 16:02:48.461564] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:02.466 [2024-07-15 16:02:48.461608] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729801 ] 00:10:02.724 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.724 [2024-07-15 16:02:48.496096] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:02.724 [2024-07-15 16:02:48.502240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:02.724 [2024-07-15 16:02:48.502286] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff1bdfb8000 00:10:02.724 [2024-07-15 16:02:48.503241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.504265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.505272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.506277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.507296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.508303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.509309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.510321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:02.724 [2024-07-15 16:02:48.511327] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:02.724 [2024-07-15 16:02:48.511348] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff1bdfad000 00:10:02.724 [2024-07-15 16:02:48.512472] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:02.724 [2024-07-15 16:02:48.526627] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:02.724 [2024-07-15 16:02:48.526663] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:02.724 [2024-07-15 16:02:48.531768] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:02.724 [2024-07-15 16:02:48.531818] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:02.724 [2024-07-15 16:02:48.531899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:02.724 [2024-07-15 16:02:48.531922] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:02.724 [2024-07-15 16:02:48.531932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:02.724 [2024-07-15 16:02:48.532778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:02.724 [2024-07-15 16:02:48.532805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:02.724 [2024-07-15 16:02:48.532818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:02.724 [2024-07-15 16:02:48.533781] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:02.724 [2024-07-15 16:02:48.533801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:02.724 [2024-07-15 16:02:48.533815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.534788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:02.724 [2024-07-15 16:02:48.534809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.535795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:02.724 [2024-07-15 16:02:48.535815] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:02.724 [2024-07-15 16:02:48.535824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.535836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.535950] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:02.724 [2024-07-15 16:02:48.535982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.535993] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:02.724 [2024-07-15 16:02:48.536806] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:02.724 [2024-07-15 16:02:48.537811] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:02.724 [2024-07-15 16:02:48.538819] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:02.724 [2024-07-15 16:02:48.539820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:02.724 [2024-07-15 16:02:48.539905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:02.724 [2024-07-15 16:02:48.540843] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:02.724 [2024-07-15 16:02:48.540863] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:02.724 [2024-07-15 16:02:48.540872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.540896] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:02.724 [2024-07-15 16:02:48.540912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.540951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:02.724 [2024-07-15 16:02:48.540973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.724 [2024-07-15 16:02:48.540994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.548972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.548995] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:02.724 [2024-07-15 16:02:48.549009] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:02.724 [2024-07-15 16:02:48.549018] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:02.724 [2024-07-15 16:02:48.549026] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:02.724 [2024-07-15 16:02:48.549034] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:02.724 [2024-07-15 16:02:48.549043] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:02.724 [2024-07-15 16:02:48.549051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.549064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.549081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.556966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.556995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.724 [2024-07-15 16:02:48.557011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.724 [2024-07-15 16:02:48.557024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.724 [2024-07-15 16:02:48.557036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.724 [2024-07-15 16:02:48.557045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.557061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.557077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.564965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.564983] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:02.724 [2024-07-15 16:02:48.564993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.565006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.565021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.565036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.572967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.573037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.573052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.573066] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:02.724 [2024-07-15 16:02:48.573075] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:02.724 [2024-07-15 16:02:48.573085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.580968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.580991] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:02.724 [2024-07-15 16:02:48.581007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.581021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.581034] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:02.724 [2024-07-15 16:02:48.581043] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.724 [2024-07-15 16:02:48.581053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.588966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.588994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.589011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.589025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:02.724 [2024-07-15 16:02:48.589034] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.724 [2024-07-15 16:02:48.589044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.724 [2024-07-15 16:02:48.596979] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:02.724 [2024-07-15 16:02:48.597001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597069] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:02.724 [2024-07-15 16:02:48.597077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:02.724 [2024-07-15 16:02:48.597085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:02.725 [2024-07-15 16:02:48.597109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.604967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.604993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.612980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.613005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.620982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.621007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.628982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.629015] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:02.725 [2024-07-15 16:02:48.629027] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:02.725 [2024-07-15 16:02:48.629033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:10:02.725 [2024-07-15 16:02:48.629040] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:02.725 [2024-07-15 16:02:48.629050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:02.725 [2024-07-15 16:02:48.629062] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:02.725 [2024-07-15 16:02:48.629070] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:02.725 [2024-07-15 16:02:48.629080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.629091] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:02.725 [2024-07-15 16:02:48.629099] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:02.725 [2024-07-15 16:02:48.629108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.629120] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:02.725 [2024-07-15 16:02:48.629129] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:02.725 [2024-07-15 16:02:48.629138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.636968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.636995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.637014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.637026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:02.725 ===================================================== 00:10:02.725 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:02.725 ===================================================== 00:10:02.725 Controller Capabilities/Features 00:10:02.725 ================================ 00:10:02.725 Vendor ID: 4e58 00:10:02.725 Subsystem Vendor ID: 4e58 00:10:02.725 Serial Number: SPDK2 00:10:02.725 Model Number: SPDK bdev Controller 00:10:02.725 Firmware Version: 24.09 00:10:02.725 Recommended Arb Burst: 6 00:10:02.725 IEEE OUI Identifier: 8d 6b 50 00:10:02.725 Multi-path I/O 00:10:02.725 May have multiple subsystem ports: Yes 00:10:02.725 May have multiple controllers: Yes 00:10:02.725 Associated with SR-IOV VF: No 00:10:02.725 Max Data Transfer Size: 131072 00:10:02.725 Max Number of Namespaces: 32 00:10:02.725 Max Number of I/O Queues: 127 00:10:02.725 NVMe Specification Version (VS): 1.3 00:10:02.725 NVMe Specification Version (Identify): 1.3 00:10:02.725 Maximum Queue Entries: 256 00:10:02.725 Contiguous Queues Required: Yes 00:10:02.725 Arbitration Mechanisms 
Supported 00:10:02.725 Weighted Round Robin: Not Supported 00:10:02.725 Vendor Specific: Not Supported 00:10:02.725 Reset Timeout: 15000 ms 00:10:02.725 Doorbell Stride: 4 bytes 00:10:02.725 NVM Subsystem Reset: Not Supported 00:10:02.725 Command Sets Supported 00:10:02.725 NVM Command Set: Supported 00:10:02.725 Boot Partition: Not Supported 00:10:02.725 Memory Page Size Minimum: 4096 bytes 00:10:02.725 Memory Page Size Maximum: 4096 bytes 00:10:02.725 Persistent Memory Region: Not Supported 00:10:02.725 Optional Asynchronous Events Supported 00:10:02.725 Namespace Attribute Notices: Supported 00:10:02.725 Firmware Activation Notices: Not Supported 00:10:02.725 ANA Change Notices: Not Supported 00:10:02.725 PLE Aggregate Log Change Notices: Not Supported 00:10:02.725 LBA Status Info Alert Notices: Not Supported 00:10:02.725 EGE Aggregate Log Change Notices: Not Supported 00:10:02.725 Normal NVM Subsystem Shutdown event: Not Supported 00:10:02.725 Zone Descriptor Change Notices: Not Supported 00:10:02.725 Discovery Log Change Notices: Not Supported 00:10:02.725 Controller Attributes 00:10:02.725 128-bit Host Identifier: Supported 00:10:02.725 Non-Operational Permissive Mode: Not Supported 00:10:02.725 NVM Sets: Not Supported 00:10:02.725 Read Recovery Levels: Not Supported 00:10:02.725 Endurance Groups: Not Supported 00:10:02.725 Predictable Latency Mode: Not Supported 00:10:02.725 Traffic Based Keep ALive: Not Supported 00:10:02.725 Namespace Granularity: Not Supported 00:10:02.725 SQ Associations: Not Supported 00:10:02.725 UUID List: Not Supported 00:10:02.725 Multi-Domain Subsystem: Not Supported 00:10:02.725 Fixed Capacity Management: Not Supported 00:10:02.725 Variable Capacity Management: Not Supported 00:10:02.725 Delete Endurance Group: Not Supported 00:10:02.725 Delete NVM Set: Not Supported 00:10:02.725 Extended LBA Formats Supported: Not Supported 00:10:02.725 Flexible Data Placement Supported: Not Supported 00:10:02.725 00:10:02.725 Controller Memory Buffer Support 00:10:02.725 ================================ 00:10:02.725 Supported: No 00:10:02.725 00:10:02.725 Persistent Memory Region Support 00:10:02.725 ================================ 00:10:02.725 Supported: No 00:10:02.725 00:10:02.725 Admin Command Set Attributes 00:10:02.725 ============================ 00:10:02.725 Security Send/Receive: Not Supported 00:10:02.725 Format NVM: Not Supported 00:10:02.725 Firmware Activate/Download: Not Supported 00:10:02.725 Namespace Management: Not Supported 00:10:02.725 Device Self-Test: Not Supported 00:10:02.725 Directives: Not Supported 00:10:02.725 NVMe-MI: Not Supported 00:10:02.725 Virtualization Management: Not Supported 00:10:02.725 Doorbell Buffer Config: Not Supported 00:10:02.725 Get LBA Status Capability: Not Supported 00:10:02.725 Command & Feature Lockdown Capability: Not Supported 00:10:02.725 Abort Command Limit: 4 00:10:02.725 Async Event Request Limit: 4 00:10:02.725 Number of Firmware Slots: N/A 00:10:02.725 Firmware Slot 1 Read-Only: N/A 00:10:02.725 Firmware Activation Without Reset: N/A 00:10:02.725 Multiple Update Detection Support: N/A 00:10:02.725 Firmware Update Granularity: No Information Provided 00:10:02.725 Per-Namespace SMART Log: No 00:10:02.725 Asymmetric Namespace Access Log Page: Not Supported 00:10:02.725 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:02.725 Command Effects Log Page: Supported 00:10:02.725 Get Log Page Extended Data: Supported 00:10:02.725 Telemetry Log Pages: Not Supported 00:10:02.725 Persistent Event Log Pages: Not Supported 
00:10:02.725 Supported Log Pages Log Page: May Support 00:10:02.725 Commands Supported & Effects Log Page: Not Supported 00:10:02.725 Feature Identifiers & Effects Log Page:May Support 00:10:02.725 NVMe-MI Commands & Effects Log Page: May Support 00:10:02.725 Data Area 4 for Telemetry Log: Not Supported 00:10:02.725 Error Log Page Entries Supported: 128 00:10:02.725 Keep Alive: Supported 00:10:02.725 Keep Alive Granularity: 10000 ms 00:10:02.725 00:10:02.725 NVM Command Set Attributes 00:10:02.725 ========================== 00:10:02.725 Submission Queue Entry Size 00:10:02.725 Max: 64 00:10:02.725 Min: 64 00:10:02.725 Completion Queue Entry Size 00:10:02.725 Max: 16 00:10:02.725 Min: 16 00:10:02.725 Number of Namespaces: 32 00:10:02.725 Compare Command: Supported 00:10:02.725 Write Uncorrectable Command: Not Supported 00:10:02.725 Dataset Management Command: Supported 00:10:02.725 Write Zeroes Command: Supported 00:10:02.725 Set Features Save Field: Not Supported 00:10:02.725 Reservations: Not Supported 00:10:02.725 Timestamp: Not Supported 00:10:02.725 Copy: Supported 00:10:02.725 Volatile Write Cache: Present 00:10:02.725 Atomic Write Unit (Normal): 1 00:10:02.725 Atomic Write Unit (PFail): 1 00:10:02.725 Atomic Compare & Write Unit: 1 00:10:02.725 Fused Compare & Write: Supported 00:10:02.725 Scatter-Gather List 00:10:02.725 SGL Command Set: Supported (Dword aligned) 00:10:02.725 SGL Keyed: Not Supported 00:10:02.725 SGL Bit Bucket Descriptor: Not Supported 00:10:02.725 SGL Metadata Pointer: Not Supported 00:10:02.725 Oversized SGL: Not Supported 00:10:02.725 SGL Metadata Address: Not Supported 00:10:02.725 SGL Offset: Not Supported 00:10:02.725 Transport SGL Data Block: Not Supported 00:10:02.725 Replay Protected Memory Block: Not Supported 00:10:02.725 00:10:02.725 Firmware Slot Information 00:10:02.725 ========================= 00:10:02.725 Active slot: 1 00:10:02.725 Slot 1 Firmware Revision: 24.09 00:10:02.725 00:10:02.725 00:10:02.725 Commands Supported and Effects 00:10:02.725 ============================== 00:10:02.725 Admin Commands 00:10:02.725 -------------- 00:10:02.725 Get Log Page (02h): Supported 00:10:02.725 Identify (06h): Supported 00:10:02.725 Abort (08h): Supported 00:10:02.725 Set Features (09h): Supported 00:10:02.725 Get Features (0Ah): Supported 00:10:02.725 Asynchronous Event Request (0Ch): Supported 00:10:02.725 Keep Alive (18h): Supported 00:10:02.725 I/O Commands 00:10:02.725 ------------ 00:10:02.725 Flush (00h): Supported LBA-Change 00:10:02.725 Write (01h): Supported LBA-Change 00:10:02.725 Read (02h): Supported 00:10:02.725 Compare (05h): Supported 00:10:02.725 Write Zeroes (08h): Supported LBA-Change 00:10:02.725 Dataset Management (09h): Supported LBA-Change 00:10:02.725 Copy (19h): Supported LBA-Change 00:10:02.725 00:10:02.725 Error Log 00:10:02.725 ========= 00:10:02.725 00:10:02.725 Arbitration 00:10:02.725 =========== 00:10:02.725 Arbitration Burst: 1 00:10:02.725 00:10:02.725 Power Management 00:10:02.725 ================ 00:10:02.725 Number of Power States: 1 00:10:02.725 Current Power State: Power State #0 00:10:02.725 Power State #0: 00:10:02.725 Max Power: 0.00 W 00:10:02.725 Non-Operational State: Operational 00:10:02.725 Entry Latency: Not Reported 00:10:02.725 Exit Latency: Not Reported 00:10:02.725 Relative Read Throughput: 0 00:10:02.725 Relative Read Latency: 0 00:10:02.725 Relative Write Throughput: 0 00:10:02.725 Relative Write Latency: 0 00:10:02.725 Idle Power: Not Reported 00:10:02.725 Active Power: Not Reported 00:10:02.725 
Non-Operational Permissive Mode: Not Supported 00:10:02.725 00:10:02.725 Health Information 00:10:02.725 ================== 00:10:02.725 Critical Warnings: 00:10:02.725 Available Spare Space: OK 00:10:02.725 Temperature: OK 00:10:02.725 Device Reliability: OK 00:10:02.725 Read Only: No 00:10:02.725 Volatile Memory Backup: OK 00:10:02.725 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:02.725 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:02.725 Available Spare: 0% 00:10:02.725 Available Sp[2024-07-15 16:02:48.637138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:02.725 [2024-07-15 16:02:48.644966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.645016] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:02.725 [2024-07-15 16:02:48.645034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.645046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.645057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.645067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:02.725 [2024-07-15 16:02:48.645154] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:02.725 [2024-07-15 16:02:48.645176] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:02.725 [2024-07-15 16:02:48.646159] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:02.725 [2024-07-15 16:02:48.646244] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:02.725 [2024-07-15 16:02:48.646273] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:02.725 [2024-07-15 16:02:48.647171] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:02.725 [2024-07-15 16:02:48.647196] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:02.725 [2024-07-15 16:02:48.647263] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:02.725 [2024-07-15 16:02:48.648467] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:02.725 are Threshold: 0% 00:10:02.725 Life Percentage Used: 0% 00:10:02.725 Data Units Read: 0 00:10:02.725 Data Units Written: 0 00:10:02.725 Host Read Commands: 0 00:10:02.725 Host Write Commands: 0 00:10:02.725 Controller Busy Time: 0 minutes 00:10:02.725 Power Cycles: 0 00:10:02.725 Power On Hours: 0 hours 00:10:02.725 Unsafe Shutdowns: 0 00:10:02.725 Unrecoverable Media 
Errors: 0 00:10:02.725 Lifetime Error Log Entries: 0 00:10:02.725 Warning Temperature Time: 0 minutes 00:10:02.725 Critical Temperature Time: 0 minutes 00:10:02.725 00:10:02.725 Number of Queues 00:10:02.725 ================ 00:10:02.725 Number of I/O Submission Queues: 127 00:10:02.725 Number of I/O Completion Queues: 127 00:10:02.725 00:10:02.725 Active Namespaces 00:10:02.725 ================= 00:10:02.725 Namespace ID:1 00:10:02.726 Error Recovery Timeout: Unlimited 00:10:02.726 Command Set Identifier: NVM (00h) 00:10:02.726 Deallocate: Supported 00:10:02.726 Deallocated/Unwritten Error: Not Supported 00:10:02.726 Deallocated Read Value: Unknown 00:10:02.726 Deallocate in Write Zeroes: Not Supported 00:10:02.726 Deallocated Guard Field: 0xFFFF 00:10:02.726 Flush: Supported 00:10:02.726 Reservation: Supported 00:10:02.726 Namespace Sharing Capabilities: Multiple Controllers 00:10:02.726 Size (in LBAs): 131072 (0GiB) 00:10:02.726 Capacity (in LBAs): 131072 (0GiB) 00:10:02.726 Utilization (in LBAs): 131072 (0GiB) 00:10:02.726 NGUID: 7EBEE552E0DE47BE887F09194B87CE8B 00:10:02.726 UUID: 7ebee552-e0de-47be-887f-09194b87ce8b 00:10:02.726 Thin Provisioning: Not Supported 00:10:02.726 Per-NS Atomic Units: Yes 00:10:02.726 Atomic Boundary Size (Normal): 0 00:10:02.726 Atomic Boundary Size (PFail): 0 00:10:02.726 Atomic Boundary Offset: 0 00:10:02.726 Maximum Single Source Range Length: 65535 00:10:02.726 Maximum Copy Length: 65535 00:10:02.726 Maximum Source Range Count: 1 00:10:02.726 NGUID/EUI64 Never Reused: No 00:10:02.726 Namespace Write Protected: No 00:10:02.726 Number of LBA Formats: 1 00:10:02.726 Current LBA Format: LBA Format #00 00:10:02.726 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:02.726 00:10:02.726 16:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:02.726 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.984 [2024-07-15 16:02:48.878799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:08.257 Initializing NVMe Controllers 00:10:08.257 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:08.257 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:08.257 Initialization complete. Launching workers. 
00:10:08.257 ======================================================== 00:10:08.257 Latency(us) 00:10:08.257 Device Information : IOPS MiB/s Average min max 00:10:08.257 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35212.07 137.55 3634.46 1158.65 9162.80 00:10:08.257 ======================================================== 00:10:08.257 Total : 35212.07 137.55 3634.46 1158.65 9162.80 00:10:08.257 00:10:08.257 [2024-07-15 16:02:53.981332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:08.257 16:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:08.257 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.257 [2024-07-15 16:02:54.223035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:13.533 Initializing NVMe Controllers 00:10:13.533 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.533 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:13.533 Initialization complete. Launching workers. 00:10:13.533 ======================================================== 00:10:13.533 Latency(us) 00:10:13.533 Device Information : IOPS MiB/s Average min max 00:10:13.533 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31919.03 124.68 4009.42 1214.71 8246.10 00:10:13.533 ======================================================== 00:10:13.533 Total : 31919.03 124.68 4009.42 1214.71 8246.10 00:10:13.533 00:10:13.533 [2024-07-15 16:02:59.246082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:13.533 16:02:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:13.533 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.533 [2024-07-15 16:02:59.457991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:18.815 [2024-07-15 16:03:04.596122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:18.815 Initializing NVMe Controllers 00:10:18.815 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:18.815 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:18.815 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:18.815 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:18.815 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:18.815 Initialization complete. Launching workers. 
00:10:18.815 Starting thread on core 2 00:10:18.815 Starting thread on core 3 00:10:18.815 Starting thread on core 1 00:10:18.815 16:03:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:18.815 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.072 [2024-07-15 16:03:04.904825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:22.389 [2024-07-15 16:03:08.296201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.389 Initializing NVMe Controllers 00:10:22.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:22.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:22.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:22.389 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:22.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:22.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:22.389 Initialization complete. Launching workers. 00:10:22.389 Starting thread on core 1 with urgent priority queue 00:10:22.389 Starting thread on core 2 with urgent priority queue 00:10:22.389 Starting thread on core 3 with urgent priority queue 00:10:22.389 Starting thread on core 0 with urgent priority queue 00:10:22.389 SPDK bdev Controller (SPDK2 ) core 0: 3706.00 IO/s 26.98 secs/100000 ios 00:10:22.389 SPDK bdev Controller (SPDK2 ) core 1: 4511.67 IO/s 22.16 secs/100000 ios 00:10:22.389 SPDK bdev Controller (SPDK2 ) core 2: 4467.67 IO/s 22.38 secs/100000 ios 00:10:22.389 SPDK bdev Controller (SPDK2 ) core 3: 4635.00 IO/s 21.57 secs/100000 ios 00:10:22.389 ======================================================== 00:10:22.389 00:10:22.389 16:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:22.389 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.647 [2024-07-15 16:03:08.586632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:22.647 Initializing NVMe Controllers 00:10:22.647 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.647 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.647 Namespace ID: 1 size: 0GB 00:10:22.647 Initialization complete. 00:10:22.647 INFO: using host memory buffer for IO 00:10:22.647 Hello world! 
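For reference, the hello_world run above points a stock SPDK example binary at the vfio-user socket through an spdk_nvme transport ID string; a minimal standalone sketch, with the socket path and subsystem NQN assumed to match the test layout above:
  # attach the hello_world example to the vfio-user controller exposed by the target
  ./build/examples/hello_world -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'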
00:10:22.647 [2024-07-15 16:03:08.595689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.647 16:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:22.905 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.905 [2024-07-15 16:03:08.890297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:24.276 Initializing NVMe Controllers 00:10:24.276 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.276 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.276 Initialization complete. Launching workers. 00:10:24.276 submit (in ns) avg, min, max = 7768.5, 3496.7, 4030398.9 00:10:24.276 complete (in ns) avg, min, max = 23004.3, 2054.4, 4016894.4 00:10:24.276 00:10:24.276 Submit histogram 00:10:24.276 ================ 00:10:24.276 Range in us Cumulative Count 00:10:24.276 3.484 - 3.508: 0.6244% ( 85) 00:10:24.276 3.508 - 3.532: 2.8720% ( 306) 00:10:24.276 3.532 - 3.556: 8.7483% ( 800) 00:10:24.277 3.556 - 3.579: 15.1976% ( 878) 00:10:24.277 3.579 - 3.603: 24.7833% ( 1305) 00:10:24.277 3.603 - 3.627: 33.1570% ( 1140) 00:10:24.277 3.627 - 3.650: 43.4700% ( 1404) 00:10:24.277 3.650 - 3.674: 49.3977% ( 807) 00:10:24.277 3.674 - 3.698: 54.2677% ( 663) 00:10:24.277 3.698 - 3.721: 58.7410% ( 609) 00:10:24.277 3.721 - 3.745: 62.6487% ( 532) 00:10:24.277 3.745 - 3.769: 66.4316% ( 515) 00:10:24.277 3.769 - 3.793: 69.4138% ( 406) 00:10:24.277 3.793 - 3.816: 73.1967% ( 515) 00:10:24.277 3.816 - 3.840: 76.6858% ( 475) 00:10:24.277 3.840 - 3.864: 80.9094% ( 575) 00:10:24.277 3.864 - 3.887: 84.0532% ( 428) 00:10:24.277 3.887 - 3.911: 86.2274% ( 296) 00:10:24.277 3.911 - 3.935: 88.2033% ( 269) 00:10:24.277 3.935 - 3.959: 89.7018% ( 204) 00:10:24.277 3.959 - 3.982: 90.8109% ( 151) 00:10:24.277 3.982 - 4.006: 91.8760% ( 145) 00:10:24.277 4.006 - 4.030: 92.7795% ( 123) 00:10:24.277 4.030 - 4.053: 93.6683% ( 121) 00:10:24.277 4.053 - 4.077: 94.3441% ( 92) 00:10:24.277 4.077 - 4.101: 94.8509% ( 69) 00:10:24.277 4.101 - 4.124: 95.2916% ( 60) 00:10:24.277 4.124 - 4.148: 95.6515% ( 49) 00:10:24.277 4.148 - 4.172: 95.9160% ( 36) 00:10:24.277 4.172 - 4.196: 96.0776% ( 22) 00:10:24.277 4.196 - 4.219: 96.2245% ( 20) 00:10:24.277 4.219 - 4.243: 96.3200% ( 13) 00:10:24.277 4.243 - 4.267: 96.4448% ( 17) 00:10:24.277 4.267 - 4.290: 96.5477% ( 14) 00:10:24.277 4.290 - 4.314: 96.6064% ( 8) 00:10:24.277 4.314 - 4.338: 96.6652% ( 8) 00:10:24.277 4.338 - 4.361: 96.7093% ( 6) 00:10:24.277 4.361 - 4.385: 96.7754% ( 9) 00:10:24.277 4.385 - 4.409: 96.7901% ( 2) 00:10:24.277 4.409 - 4.433: 96.8121% ( 3) 00:10:24.277 4.433 - 4.456: 96.8415% ( 4) 00:10:24.277 4.456 - 4.480: 96.8562% ( 2) 00:10:24.277 4.480 - 4.504: 96.8856% ( 4) 00:10:24.277 4.504 - 4.527: 96.8929% ( 1) 00:10:24.277 4.527 - 4.551: 96.9149% ( 3) 00:10:24.277 4.551 - 4.575: 96.9223% ( 1) 00:10:24.277 4.575 - 4.599: 96.9296% ( 1) 00:10:24.277 4.599 - 4.622: 96.9517% ( 3) 00:10:24.277 4.622 - 4.646: 96.9664% ( 2) 00:10:24.277 4.670 - 4.693: 96.9737% ( 1) 00:10:24.277 4.693 - 4.717: 96.9810% ( 1) 00:10:24.277 4.717 - 4.741: 96.9884% ( 1) 00:10:24.277 4.741 - 4.764: 97.0031% ( 2) 00:10:24.277 4.764 - 4.788: 97.0545% ( 7) 00:10:24.277 4.788 - 4.812: 97.1280% ( 10) 00:10:24.277 4.812 - 4.836: 97.1720% ( 6) 00:10:24.277 4.836 
- 4.859: 97.2234% ( 7) 00:10:24.277 4.859 - 4.883: 97.2602% ( 5) 00:10:24.277 4.883 - 4.907: 97.3263% ( 9) 00:10:24.277 4.907 - 4.930: 97.3777% ( 7) 00:10:24.277 4.930 - 4.954: 97.4365% ( 8) 00:10:24.277 4.954 - 4.978: 97.4879% ( 7) 00:10:24.277 4.978 - 5.001: 97.5320% ( 6) 00:10:24.277 5.001 - 5.025: 97.5613% ( 4) 00:10:24.277 5.025 - 5.049: 97.5981% ( 5) 00:10:24.277 5.049 - 5.073: 97.6128% ( 2) 00:10:24.277 5.073 - 5.096: 97.6642% ( 7) 00:10:24.277 5.096 - 5.120: 97.7009% ( 5) 00:10:24.277 5.120 - 5.144: 97.7376% ( 5) 00:10:24.277 5.144 - 5.167: 97.7450% ( 1) 00:10:24.277 5.167 - 5.191: 97.7670% ( 3) 00:10:24.277 5.191 - 5.215: 97.7743% ( 1) 00:10:24.277 5.215 - 5.239: 97.7890% ( 2) 00:10:24.277 5.286 - 5.310: 97.7964% ( 1) 00:10:24.277 5.310 - 5.333: 97.8037% ( 1) 00:10:24.277 5.333 - 5.357: 97.8111% ( 1) 00:10:24.277 5.357 - 5.381: 97.8184% ( 1) 00:10:24.277 5.381 - 5.404: 97.8258% ( 1) 00:10:24.277 5.404 - 5.428: 97.8331% ( 1) 00:10:24.277 5.452 - 5.476: 97.8405% ( 1) 00:10:24.277 5.476 - 5.499: 97.8551% ( 2) 00:10:24.277 5.547 - 5.570: 97.8625% ( 1) 00:10:24.277 5.618 - 5.641: 97.8698% ( 1) 00:10:24.277 5.641 - 5.665: 97.8772% ( 1) 00:10:24.277 5.760 - 5.784: 97.8919% ( 2) 00:10:24.277 5.831 - 5.855: 97.8992% ( 1) 00:10:24.277 5.855 - 5.879: 97.9066% ( 1) 00:10:24.277 6.068 - 6.116: 97.9139% ( 1) 00:10:24.277 6.258 - 6.305: 97.9213% ( 1) 00:10:24.277 6.353 - 6.400: 97.9359% ( 2) 00:10:24.277 6.495 - 6.542: 97.9433% ( 1) 00:10:24.277 6.684 - 6.732: 97.9653% ( 3) 00:10:24.277 6.779 - 6.827: 97.9727% ( 1) 00:10:24.277 6.827 - 6.874: 97.9800% ( 1) 00:10:24.277 7.016 - 7.064: 97.9874% ( 1) 00:10:24.277 7.064 - 7.111: 97.9947% ( 1) 00:10:24.277 7.111 - 7.159: 98.0021% ( 1) 00:10:24.277 7.159 - 7.206: 98.0094% ( 1) 00:10:24.277 7.206 - 7.253: 98.0167% ( 1) 00:10:24.277 7.253 - 7.301: 98.0314% ( 2) 00:10:24.277 7.348 - 7.396: 98.0388% ( 1) 00:10:24.277 7.396 - 7.443: 98.0535% ( 2) 00:10:24.277 7.538 - 7.585: 98.0608% ( 1) 00:10:24.277 7.680 - 7.727: 98.0682% ( 1) 00:10:24.277 7.727 - 7.775: 98.0755% ( 1) 00:10:24.277 7.870 - 7.917: 98.0829% ( 1) 00:10:24.277 7.917 - 7.964: 98.0902% ( 1) 00:10:24.277 7.964 - 8.012: 98.0975% ( 1) 00:10:24.277 8.059 - 8.107: 98.1122% ( 2) 00:10:24.277 8.107 - 8.154: 98.1269% ( 2) 00:10:24.277 8.154 - 8.201: 98.1416% ( 2) 00:10:24.277 8.201 - 8.249: 98.1490% ( 1) 00:10:24.277 8.296 - 8.344: 98.1563% ( 1) 00:10:24.277 8.344 - 8.391: 98.1637% ( 1) 00:10:24.277 8.391 - 8.439: 98.1783% ( 2) 00:10:24.277 8.439 - 8.486: 98.2077% ( 4) 00:10:24.277 8.533 - 8.581: 98.2151% ( 1) 00:10:24.277 8.581 - 8.628: 98.2224% ( 1) 00:10:24.277 8.628 - 8.676: 98.2298% ( 1) 00:10:24.277 8.723 - 8.770: 98.2371% ( 1) 00:10:24.277 8.960 - 9.007: 98.2518% ( 2) 00:10:24.277 9.007 - 9.055: 98.2591% ( 1) 00:10:24.277 9.055 - 9.102: 98.2812% ( 3) 00:10:24.277 9.150 - 9.197: 98.2885% ( 1) 00:10:24.277 9.197 - 9.244: 98.2959% ( 1) 00:10:24.277 9.244 - 9.292: 98.3032% ( 1) 00:10:24.277 9.292 - 9.339: 98.3326% ( 4) 00:10:24.277 9.339 - 9.387: 98.3399% ( 1) 00:10:24.277 9.387 - 9.434: 98.3620% ( 3) 00:10:24.277 9.481 - 9.529: 98.3693% ( 1) 00:10:24.277 9.529 - 9.576: 98.3767% ( 1) 00:10:24.277 9.576 - 9.624: 98.3840% ( 1) 00:10:24.277 9.624 - 9.671: 98.3914% ( 1) 00:10:24.277 9.671 - 9.719: 98.3987% ( 1) 00:10:24.277 9.766 - 9.813: 98.4134% ( 2) 00:10:24.277 9.813 - 9.861: 98.4207% ( 1) 00:10:24.277 9.861 - 9.908: 98.4281% ( 1) 00:10:24.277 9.908 - 9.956: 98.4354% ( 1) 00:10:24.277 9.956 - 10.003: 98.4428% ( 1) 00:10:24.277 10.003 - 10.050: 98.4501% ( 1) 00:10:24.277 10.050 - 10.098: 98.4575% ( 
1) 00:10:24.277 10.145 - 10.193: 98.4722% ( 2) 00:10:24.277 10.240 - 10.287: 98.4795% ( 1) 00:10:24.277 10.287 - 10.335: 98.4942% ( 2) 00:10:24.277 10.382 - 10.430: 98.5015% ( 1) 00:10:24.277 10.430 - 10.477: 98.5089% ( 1) 00:10:24.277 10.572 - 10.619: 98.5236% ( 2) 00:10:24.277 10.761 - 10.809: 98.5309% ( 1) 00:10:24.277 10.809 - 10.856: 98.5383% ( 1) 00:10:24.277 10.904 - 10.951: 98.5456% ( 1) 00:10:24.277 10.951 - 10.999: 98.5530% ( 1) 00:10:24.277 11.093 - 11.141: 98.5677% ( 2) 00:10:24.277 11.188 - 11.236: 98.5823% ( 2) 00:10:24.277 11.283 - 11.330: 98.5970% ( 2) 00:10:24.277 11.330 - 11.378: 98.6117% ( 2) 00:10:24.277 11.378 - 11.425: 98.6191% ( 1) 00:10:24.277 11.473 - 11.520: 98.6411% ( 3) 00:10:24.277 11.662 - 11.710: 98.6485% ( 1) 00:10:24.277 11.710 - 11.757: 98.6558% ( 1) 00:10:24.277 11.757 - 11.804: 98.6631% ( 1) 00:10:24.277 11.804 - 11.852: 98.6705% ( 1) 00:10:24.277 11.947 - 11.994: 98.6778% ( 1) 00:10:24.277 12.136 - 12.231: 98.6852% ( 1) 00:10:24.277 12.231 - 12.326: 98.6925% ( 1) 00:10:24.277 12.326 - 12.421: 98.7146% ( 3) 00:10:24.277 12.421 - 12.516: 98.7292% ( 2) 00:10:24.277 12.610 - 12.705: 98.7439% ( 2) 00:10:24.277 12.800 - 12.895: 98.7733% ( 4) 00:10:24.277 12.990 - 13.084: 98.7880% ( 2) 00:10:24.277 13.084 - 13.179: 98.7954% ( 1) 00:10:24.277 13.179 - 13.274: 98.8100% ( 2) 00:10:24.277 13.274 - 13.369: 98.8321% ( 3) 00:10:24.277 13.369 - 13.464: 98.8468% ( 2) 00:10:24.277 13.464 - 13.559: 98.8615% ( 2) 00:10:24.277 13.559 - 13.653: 98.8762% ( 2) 00:10:24.277 13.843 - 13.938: 98.8835% ( 1) 00:10:24.277 13.938 - 14.033: 98.8908% ( 1) 00:10:24.277 14.033 - 14.127: 98.8982% ( 1) 00:10:24.277 14.601 - 14.696: 98.9202% ( 3) 00:10:24.277 14.791 - 14.886: 98.9276% ( 1) 00:10:24.277 14.886 - 14.981: 98.9349% ( 1) 00:10:24.277 14.981 - 15.076: 98.9423% ( 1) 00:10:24.277 16.972 - 17.067: 98.9496% ( 1) 00:10:24.277 17.161 - 17.256: 98.9643% ( 2) 00:10:24.277 17.256 - 17.351: 98.9716% ( 1) 00:10:24.277 17.351 - 17.446: 98.9863% ( 2) 00:10:24.277 17.446 - 17.541: 99.0157% ( 4) 00:10:24.277 17.541 - 17.636: 99.0451% ( 4) 00:10:24.277 17.636 - 17.730: 99.1112% ( 9) 00:10:24.277 17.730 - 17.825: 99.1479% ( 5) 00:10:24.277 17.825 - 17.920: 99.2287% ( 11) 00:10:24.277 17.920 - 18.015: 99.2802% ( 7) 00:10:24.277 18.015 - 18.110: 99.3389% ( 8) 00:10:24.277 18.110 - 18.204: 99.3977% ( 8) 00:10:24.278 18.204 - 18.299: 99.4491% ( 7) 00:10:24.278 18.299 - 18.394: 99.5005% ( 7) 00:10:24.278 18.394 - 18.489: 99.5593% ( 8) 00:10:24.278 18.489 - 18.584: 99.6033% ( 6) 00:10:24.278 18.584 - 18.679: 99.6327% ( 4) 00:10:24.278 18.679 - 18.773: 99.6401% ( 1) 00:10:24.278 18.773 - 18.868: 99.6695% ( 4) 00:10:24.278 18.868 - 18.963: 99.6988% ( 4) 00:10:24.278 19.058 - 19.153: 99.7062% ( 1) 00:10:24.278 19.153 - 19.247: 99.7356% ( 4) 00:10:24.278 19.437 - 19.532: 99.7429% ( 1) 00:10:24.278 19.627 - 19.721: 99.7503% ( 1) 00:10:24.278 20.385 - 20.480: 99.7576% ( 1) 00:10:24.278 20.764 - 20.859: 99.7649% ( 1) 00:10:24.278 20.954 - 21.049: 99.7723% ( 1) 00:10:24.278 21.333 - 21.428: 99.7796% ( 1) 00:10:24.278 22.092 - 22.187: 99.7870% ( 1) 00:10:24.278 22.945 - 23.040: 99.7943% ( 1) 00:10:24.278 23.230 - 23.324: 99.8090% ( 2) 00:10:24.278 23.419 - 23.514: 99.8164% ( 1) 00:10:24.278 23.609 - 23.704: 99.8237% ( 1) 00:10:24.278 23.704 - 23.799: 99.8311% ( 1) 00:10:24.278 25.221 - 25.410: 99.8384% ( 1) 00:10:24.278 25.790 - 25.979: 99.8457% ( 1) 00:10:24.278 25.979 - 26.169: 99.8531% ( 1) 00:10:24.278 26.169 - 26.359: 99.8604% ( 1) 00:10:24.278 26.738 - 26.927: 99.8678% ( 1) 00:10:24.278 26.927 - 27.117: 
99.8751% ( 1) 00:10:24.278 28.065 - 28.255: 99.8825% ( 1) 00:10:24.278 28.444 - 28.634: 99.8898% ( 1) 00:10:24.278 29.203 - 29.393: 99.8972% ( 1) 00:10:24.278 35.461 - 35.650: 99.9045% ( 1) 00:10:24.278 3980.705 - 4004.978: 99.9706% ( 9) 00:10:24.278 4004.978 - 4029.250: 99.9927% ( 3) 00:10:24.278 4029.250 - 4053.523: 100.0000% ( 1) 00:10:24.278 00:10:24.278 Complete histogram 00:10:24.278 ================== 00:10:24.278 Range in us Cumulative Count 00:10:24.278 2.050 - 2.062: 2.4166% ( 329) 00:10:24.278 2.062 - 2.074: 41.6777% ( 5345) 00:10:24.278 2.074 - 2.086: 47.2749% ( 762) 00:10:24.278 2.086 - 2.098: 51.5278% ( 579) 00:10:24.278 2.098 - 2.110: 60.5480% ( 1228) 00:10:24.278 2.110 - 2.121: 62.5312% ( 270) 00:10:24.278 2.121 - 2.133: 68.1211% ( 761) 00:10:24.278 2.133 - 2.145: 76.7078% ( 1169) 00:10:24.278 2.145 - 2.157: 77.4423% ( 100) 00:10:24.278 2.157 - 2.169: 79.9324% ( 339) 00:10:24.278 2.169 - 2.181: 82.3491% ( 329) 00:10:24.278 2.181 - 2.193: 82.9734% ( 85) 00:10:24.278 2.193 - 2.204: 85.0081% ( 277) 00:10:24.278 2.204 - 2.216: 88.8644% ( 525) 00:10:24.278 2.216 - 2.228: 90.9285% ( 281) 00:10:24.278 2.228 - 2.240: 92.4343% ( 205) 00:10:24.278 2.240 - 2.252: 93.9841% ( 211) 00:10:24.278 2.252 - 2.264: 94.3367% ( 48) 00:10:24.278 2.264 - 2.276: 94.6085% ( 37) 00:10:24.278 2.276 - 2.287: 94.9317% ( 44) 00:10:24.278 2.287 - 2.299: 95.4973% ( 77) 00:10:24.278 2.299 - 2.311: 95.7323% ( 32) 00:10:24.278 2.311 - 2.323: 95.8352% ( 14) 00:10:24.278 2.323 - 2.335: 95.8939% ( 8) 00:10:24.278 2.335 - 2.347: 95.9454% ( 7) 00:10:24.278 2.347 - 2.359: 96.0335% ( 12) 00:10:24.278 2.359 - 2.370: 96.2906% ( 35) 00:10:24.278 2.370 - 2.382: 96.5330% ( 33) 00:10:24.278 2.382 - 2.394: 96.8048% ( 37) 00:10:24.278 2.394 - 2.406: 96.9517% ( 20) 00:10:24.278 2.406 - 2.418: 97.1426% ( 26) 00:10:24.278 2.418 - 2.430: 97.3557% ( 29) 00:10:24.278 2.430 - 2.441: 97.4879% ( 18) 00:10:24.278 2.441 - 2.453: 97.6495% ( 22) 00:10:24.278 2.453 - 2.465: 97.8111% ( 22) 00:10:24.278 2.465 - 2.477: 97.9874% ( 24) 00:10:24.278 2.477 - 2.489: 98.1049% ( 16) 00:10:24.278 2.489 - 2.501: 98.1563% ( 7) 00:10:24.278 2.501 - 2.513: 98.1783% ( 3) 00:10:24.278 2.513 - 2.524: 98.2445% ( 9) 00:10:24.278 2.524 - 2.536: 98.3179% ( 10) 00:10:24.278 2.536 - 2.548: 98.3693% ( 7) 00:10:24.278 2.548 - 2.560: 98.4134% ( 6) 00:10:24.278 2.560 - 2.572: 98.4281% ( 2) 00:10:24.278 2.572 - 2.584: 98.4648% ( 5) 00:10:24.278 2.584 - 2.596: 98.4942% ( 4) 00:10:24.278 2.596 - 2.607: 98.5015% ( 1) 00:10:24.278 2.607 - 2.619: 98.5089% ( 1) 00:10:24.278 2.619 - 2.631: 98.5236% ( 2) 00:10:24.278 2.631 - 2.643: 98.5309% ( 1) 00:10:24.278 2.655 - 2.667: 98.5383% ( 1) 00:10:24.278 2.667 - 2.679: 98.5456% ( 1) 00:10:24.278 2.690 - 2.702: 98.5530% ( 1) 00:10:24.278 2.761 - 2.773: 98.5677% ( 2) 00:10:24.278 2.773 - 2.785: 98.5750% ( 1) 00:10:24.278 3.390 - 3.413: 98.5823% ( 1) 00:10:24.278 3.461 - 3.484: 98.5897% ( 1) 00:10:24.278 3.484 - 3.508: 98.6117% ( 3) 00:10:24.278 3.508 - 3.532: 98.6191% ( 1) 00:10:24.278 3.532 - 3.556: 98.6411% ( 3) 00:10:24.278 3.556 - 3.579: 98.6485% ( 1) 00:10:24.278 3.579 - 3.603: 98.6778% ( 4) 00:10:24.278 3.650 - 3.674: 98.6852% ( 1) 00:10:24.278 3.674 - 3.698: 98.6999% ( 2) 00:10:24.278 3.698 - 3.721: 98.7146% ( 2) 00:10:24.278 3.721 - 3.745: 98.7219% ( 1) 00:10:24.278 3.769 - 3.793: 98.7366% ( 2) 00:10:24.278 3.793 - 3.816: 98.7513% ( 2) 00:10:24.278 3.816 - 3.840: 98.7660% ( 2) 00:10:24.278 3.887 - 3.911: 98.7880% ( 3) 00:10:24.278 3.911 - 3.935: 98.7954% ( 1) 00:10:24.278 3.959 - 3.982: 98.8027% ( 1) 00:10:24.278 4.030 - 
4.053: 98.8100% ( 1) 00:10:24.278 4.053 - 4.077: 98.8174% ( 1) 00:10:24.278 4.077 - 4.101: 98.8394% ( 3) 00:10:24.278 4.101 - 4.124: 98.8468% ( 1) 00:10:24.278 6.210 - 6.258: 98.8541% ( 1) 00:10:24.278 6.447 - 6.495: 98.8615% ( 1) 00:10:24.278 6.495 - 6.542: 98.8688% ( 1) 00:10:24.278 6.590 - 6.637: 98.8762% ( 1) 00:10:24.278 6.732 - 6.779: 98.8835% ( 1) 00:10:24.278 6.874 - 6.921: 98.8908% ( 1) 00:10:24.278 6.921 - 6.969: 98.8982% ( 1) 00:10:24.278 7.159 - 7.206: 98.9055% ( 1) 00:10:24.278 7.253 - 7.301: 98.9202% ( 2) 00:10:24.278 7.348 - 7.396: 98.9276% ( 1) 00:10:24.278 7.443 - 7.490: 98.9349% ( 1) 00:10:24.278 7.633 - 7.680: 98.9423% ( 1) 00:10:24.278 7.775 - 7.822: 98.9496% ( 1) 00:10:24.278 7.870 - 7.917: 98.9570% ( 1) 00:10:24.278 7.964 - 8.012: 98.9643% ( 1) 00:10:24.278 8.012 - 8.059: 98.9716% ( 1) 00:10:24.278 8.154 - 8.201: 98.9790% ( 1) 00:10:24.278 8.628 - 8.676: 98.9863% ( 1) 00:10:24.278 8.818 - 8.865: 98.9937% ( 1) 00:10:24.278 11.615 - 11.662: 99.0010% ( 1) 00:10:24.278 15.455 - 15.550: 99.0084% ( 1) 00:10:24.278 15.550 - 15.644: 99.0157% ( 1) 00:10:24.278 15.644 - 15.739: 99.0304% ( 2) 00:10:24.278 15.834 - 15.929: 99.0378% ( 1) 00:10:24.278 15.929 - 16.024: 99.0451% ( 1) 00:10:24.278 16.024 - 16.119: 99.0965% ( 7) 00:10:24.278 16.119 - 16.213: 99.1186% ( 3) 00:10:24.278 16.213 - 16.308: 99.1479% ( 4) 00:10:24.278 16.308 - 16.403: 99.1920% ( 6) 00:10:24.278 16.403 - 16.498: 99.2508% ( 8) 00:10:24.278 16.498 - 16.593: 99.2655% ( 2) 00:10:24.278 16.593 - 16.687: 99.2948% ( 4) 00:10:24.278 16.687 - 16.782: 99.3389% ( 6) 00:10:24.278 16.782 - 16.877: 99.3536%[2024-07-15 16:03:09.995907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:24.278 ( 2) 00:10:24.278 16.877 - 16.972: 99.3830% ( 4) 00:10:24.278 16.972 - 17.067: 99.3903% ( 1) 00:10:24.278 17.067 - 17.161: 99.3977% ( 1) 00:10:24.278 17.256 - 17.351: 99.4050% ( 1) 00:10:24.278 17.351 - 17.446: 99.4124% ( 1) 00:10:24.278 17.446 - 17.541: 99.4197% ( 1) 00:10:24.278 17.541 - 17.636: 99.4271% ( 1) 00:10:24.278 17.825 - 17.920: 99.4344% ( 1) 00:10:24.278 18.110 - 18.204: 99.4491% ( 2) 00:10:24.278 18.489 - 18.584: 99.4564% ( 1) 00:10:24.278 20.101 - 20.196: 99.4638% ( 1) 00:10:24.278 24.273 - 24.462: 99.4711% ( 1) 00:10:24.278 25.600 - 25.790: 99.4785% ( 1) 00:10:24.278 3131.164 - 3155.437: 99.4858% ( 1) 00:10:24.278 3980.705 - 4004.978: 99.9339% ( 61) 00:10:24.278 4004.978 - 4029.250: 100.0000% ( 9) 00:10:24.278 00:10:24.278 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:24.278 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:24.278 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:24.278 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:24.278 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:24.536 [ 00:10:24.536 { 00:10:24.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:24.536 "subtype": "Discovery", 00:10:24.536 "listen_addresses": [], 00:10:24.536 "allow_any_host": true, 00:10:24.536 "hosts": [] 00:10:24.536 }, 00:10:24.536 { 00:10:24.536 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:24.536 "subtype": "NVMe", 00:10:24.536 "listen_addresses": [ 
00:10:24.536 { 00:10:24.536 "trtype": "VFIOUSER", 00:10:24.536 "adrfam": "IPv4", 00:10:24.536 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:24.536 "trsvcid": "0" 00:10:24.536 } 00:10:24.536 ], 00:10:24.536 "allow_any_host": true, 00:10:24.536 "hosts": [], 00:10:24.536 "serial_number": "SPDK1", 00:10:24.536 "model_number": "SPDK bdev Controller", 00:10:24.536 "max_namespaces": 32, 00:10:24.536 "min_cntlid": 1, 00:10:24.536 "max_cntlid": 65519, 00:10:24.536 "namespaces": [ 00:10:24.536 { 00:10:24.536 "nsid": 1, 00:10:24.536 "bdev_name": "Malloc1", 00:10:24.536 "name": "Malloc1", 00:10:24.536 "nguid": "11514F81760A48BAB47388FC330E1566", 00:10:24.536 "uuid": "11514f81-760a-48ba-b473-88fc330e1566" 00:10:24.536 }, 00:10:24.536 { 00:10:24.536 "nsid": 2, 00:10:24.536 "bdev_name": "Malloc3", 00:10:24.536 "name": "Malloc3", 00:10:24.536 "nguid": "D7B351CBF59546E8A5029EC731D343C1", 00:10:24.536 "uuid": "d7b351cb-f595-46e8-a502-9ec731d343c1" 00:10:24.536 } 00:10:24.536 ] 00:10:24.536 }, 00:10:24.536 { 00:10:24.536 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:24.536 "subtype": "NVMe", 00:10:24.536 "listen_addresses": [ 00:10:24.536 { 00:10:24.536 "trtype": "VFIOUSER", 00:10:24.536 "adrfam": "IPv4", 00:10:24.536 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:24.536 "trsvcid": "0" 00:10:24.536 } 00:10:24.536 ], 00:10:24.536 "allow_any_host": true, 00:10:24.536 "hosts": [], 00:10:24.536 "serial_number": "SPDK2", 00:10:24.536 "model_number": "SPDK bdev Controller", 00:10:24.536 "max_namespaces": 32, 00:10:24.536 "min_cntlid": 1, 00:10:24.536 "max_cntlid": 65519, 00:10:24.536 "namespaces": [ 00:10:24.536 { 00:10:24.536 "nsid": 1, 00:10:24.536 "bdev_name": "Malloc2", 00:10:24.536 "name": "Malloc2", 00:10:24.536 "nguid": "7EBEE552E0DE47BE887F09194B87CE8B", 00:10:24.536 "uuid": "7ebee552-e0de-47be-887f-09194b87ce8b" 00:10:24.536 } 00:10:24.536 ] 00:10:24.536 } 00:10:24.536 ] 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=732952 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:24.536 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:24.536 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.536 [2024-07-15 16:03:10.461514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:24.793 Malloc4 00:10:24.793 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:25.050 [2024-07-15 16:03:10.814124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:25.050 16:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:25.050 Asynchronous Event Request test 00:10:25.050 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.050 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:25.050 Registering asynchronous event callbacks... 00:10:25.050 Starting namespace attribute notice tests for all controllers... 00:10:25.050 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:25.050 aer_cb - Changed Namespace 00:10:25.050 Cleaning up... 00:10:25.307 [ 00:10:25.307 { 00:10:25.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:25.307 "subtype": "Discovery", 00:10:25.307 "listen_addresses": [], 00:10:25.307 "allow_any_host": true, 00:10:25.307 "hosts": [] 00:10:25.307 }, 00:10:25.307 { 00:10:25.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:25.307 "subtype": "NVMe", 00:10:25.307 "listen_addresses": [ 00:10:25.307 { 00:10:25.307 "trtype": "VFIOUSER", 00:10:25.308 "adrfam": "IPv4", 00:10:25.308 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:25.308 "trsvcid": "0" 00:10:25.308 } 00:10:25.308 ], 00:10:25.308 "allow_any_host": true, 00:10:25.308 "hosts": [], 00:10:25.308 "serial_number": "SPDK1", 00:10:25.308 "model_number": "SPDK bdev Controller", 00:10:25.308 "max_namespaces": 32, 00:10:25.308 "min_cntlid": 1, 00:10:25.308 "max_cntlid": 65519, 00:10:25.308 "namespaces": [ 00:10:25.308 { 00:10:25.308 "nsid": 1, 00:10:25.308 "bdev_name": "Malloc1", 00:10:25.308 "name": "Malloc1", 00:10:25.308 "nguid": "11514F81760A48BAB47388FC330E1566", 00:10:25.308 "uuid": "11514f81-760a-48ba-b473-88fc330e1566" 00:10:25.308 }, 00:10:25.308 { 00:10:25.308 "nsid": 2, 00:10:25.308 "bdev_name": "Malloc3", 00:10:25.308 "name": "Malloc3", 00:10:25.308 "nguid": "D7B351CBF59546E8A5029EC731D343C1", 00:10:25.308 "uuid": "d7b351cb-f595-46e8-a502-9ec731d343c1" 00:10:25.308 } 00:10:25.308 ] 00:10:25.308 }, 00:10:25.308 { 00:10:25.308 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:25.308 "subtype": "NVMe", 00:10:25.308 "listen_addresses": [ 00:10:25.308 { 00:10:25.308 "trtype": "VFIOUSER", 00:10:25.308 "adrfam": "IPv4", 00:10:25.308 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:25.308 "trsvcid": "0" 00:10:25.308 } 00:10:25.308 ], 00:10:25.308 "allow_any_host": true, 00:10:25.308 "hosts": [], 00:10:25.308 "serial_number": "SPDK2", 00:10:25.308 "model_number": "SPDK bdev Controller", 00:10:25.308 
"max_namespaces": 32, 00:10:25.308 "min_cntlid": 1, 00:10:25.308 "max_cntlid": 65519, 00:10:25.308 "namespaces": [ 00:10:25.308 { 00:10:25.308 "nsid": 1, 00:10:25.308 "bdev_name": "Malloc2", 00:10:25.308 "name": "Malloc2", 00:10:25.308 "nguid": "7EBEE552E0DE47BE887F09194B87CE8B", 00:10:25.308 "uuid": "7ebee552-e0de-47be-887f-09194b87ce8b" 00:10:25.308 }, 00:10:25.308 { 00:10:25.308 "nsid": 2, 00:10:25.308 "bdev_name": "Malloc4", 00:10:25.308 "name": "Malloc4", 00:10:25.308 "nguid": "D54B091872934A00BE46C8AE2A01ED36", 00:10:25.308 "uuid": "d54b0918-7293-4a00-be46-c8ae2a01ed36" 00:10:25.308 } 00:10:25.308 ] 00:10:25.308 } 00:10:25.308 ] 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 732952 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 726729 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 726729 ']' 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 726729 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 726729 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 726729' 00:10:25.308 killing process with pid 726729 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 726729 00:10:25.308 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 726729 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=733096 00:10:25.565 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 733096' 00:10:25.566 Process pid: 733096 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 733096 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 733096 ']' 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.566 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 [2024-07-15 16:03:11.499423] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:25.566 [2024-07-15 16:03:11.500419] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:25.566 [2024-07-15 16:03:11.500473] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.566 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.566 [2024-07-15 16:03:11.557328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.825 [2024-07-15 16:03:11.660644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.825 [2024-07-15 16:03:11.660697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.825 [2024-07-15 16:03:11.660710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.825 [2024-07-15 16:03:11.660721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.825 [2024-07-15 16:03:11.660730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.825 [2024-07-15 16:03:11.660821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.825 [2024-07-15 16:03:11.660929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.825 [2024-07-15 16:03:11.661007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.825 [2024-07-15 16:03:11.661012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.825 [2024-07-15 16:03:11.756869] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:25.825 [2024-07-15 16:03:11.757109] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:25.825 [2024-07-15 16:03:11.757341] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:25.825 [2024-07-15 16:03:11.757988] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:25.825 [2024-07-15 16:03:11.758248] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
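A condensed sketch of the interrupt-mode bring-up exercised in this phase, with flag values copied from the commands logged around this point (run from an SPDK build tree; the extra '-M -I' transport options are simply passed through as in the test script):
  # start the target on cores 0-3 in interrupt mode
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # create the vfio-user transport with the extra options used by this test
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I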
00:10:25.825 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.825 16:03:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:25.825 16:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:27.201 16:03:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:27.201 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:27.201 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:27.201 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:27.201 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:27.201 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:27.457 Malloc1 00:10:27.457 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:27.714 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:27.971 16:03:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:28.228 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:28.228 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:28.228 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:28.486 Malloc2 00:10:28.486 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:29.051 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:29.051 16:03:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 733096 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 733096 ']' 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 733096 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.310 16:03:15 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 733096 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 733096' 00:10:29.310 killing process with pid 733096 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 733096 00:10:29.310 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 733096 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.877 00:10:29.877 real 0m52.959s 00:10:29.877 user 3m28.877s 00:10:29.877 sys 0m4.402s 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:29.877 ************************************ 00:10:29.877 END TEST nvmf_vfio_user 00:10:29.877 ************************************ 00:10:29.877 16:03:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:29.877 16:03:15 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.877 16:03:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.877 16:03:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.877 16:03:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.877 ************************************ 00:10:29.877 START TEST nvmf_vfio_user_nvme_compliance 00:10:29.877 ************************************ 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.877 * Looking for test storage... 
00:10:29.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.877 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=733688 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 733688' 00:10:29.878 Process pid: 733688 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 733688 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 733688 ']' 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.878 16:03:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:29.878 [2024-07-15 16:03:15.781695] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:29.878 [2024-07-15 16:03:15.781768] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.878 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.878 [2024-07-15 16:03:15.840420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.136 [2024-07-15 16:03:15.947959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.136 [2024-07-15 16:03:15.948016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.136 [2024-07-15 16:03:15.948045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.136 [2024-07-15 16:03:15.948056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.136 [2024-07-15 16:03:15.948066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
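(For reference, the compliance bring-up traced here reduces to the short sequence below. This is a minimal sketch reconstructed only from the rpc_cmd calls visible in this log; it assumes the stock SPDK scripts/rpc.py front end with its default /var/tmp/spdk.sock socket and abbreviates the workspace paths.)

# start the target on cores 0-2 with all tracepoint groups enabled
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

# vfio-user subsystem backed by a 64 MB malloc bdev with 512-byte blocks
scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# run the CUnit compliance suite against the vfio-user endpoint
test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'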
00:10:30.136 [2024-07-15 16:03:15.948146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.136 [2024-07-15 16:03:15.948212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.136 [2024-07-15 16:03:15.948216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.136 16:03:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.136 16:03:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:30.136 16:03:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:31.073 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:31.073 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:31.073 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:31.073 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.073 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.333 malloc0 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.333 16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.333 
16:03:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:31.333 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.333 00:10:31.333 00:10:31.333 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.333 http://cunit.sourceforge.net/ 00:10:31.333 00:10:31.333 00:10:31.333 Suite: nvme_compliance 00:10:31.333 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 16:03:17.302668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.333 [2024-07-15 16:03:17.304166] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:31.333 [2024-07-15 16:03:17.304193] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:31.333 [2024-07-15 16:03:17.304208] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:31.333 [2024-07-15 16:03:17.305698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.592 passed 00:10:31.592 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 16:03:17.393351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.592 [2024-07-15 16:03:17.396364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.592 passed 00:10:31.592 Test: admin_identify_ns ...[2024-07-15 16:03:17.482892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.592 [2024-07-15 16:03:17.541975] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:31.592 [2024-07-15 16:03:17.549968] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:31.592 [2024-07-15 16:03:17.571108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.850 passed 00:10:31.851 Test: admin_get_features_mandatory_features ...[2024-07-15 16:03:17.655108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.851 [2024-07-15 16:03:17.658129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.851 passed 00:10:31.851 Test: admin_get_features_optional_features ...[2024-07-15 16:03:17.739651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.851 [2024-07-15 16:03:17.742675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.851 passed 00:10:31.851 Test: admin_set_features_number_of_queues ...[2024-07-15 16:03:17.827961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.110 [2024-07-15 16:03:17.931071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.110 passed 00:10:32.110 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 16:03:18.017225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.110 [2024-07-15 16:03:18.020264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.110 passed 00:10:32.110 Test: admin_get_log_page_with_lpo ...[2024-07-15 16:03:18.102516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.370 [2024-07-15 16:03:18.170973] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:32.370 [2024-07-15 16:03:18.184035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.370 passed 00:10:32.370 Test: fabric_property_get ...[2024-07-15 16:03:18.267031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.370 [2024-07-15 16:03:18.268330] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:32.370 [2024-07-15 16:03:18.270056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.370 passed 00:10:32.370 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 16:03:18.352576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.370 [2024-07-15 16:03:18.353870] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:32.370 [2024-07-15 16:03:18.355597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.630 passed 00:10:32.630 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 16:03:18.442106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.630 [2024-07-15 16:03:18.523967] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:32.630 [2024-07-15 16:03:18.539970] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:32.630 [2024-07-15 16:03:18.545054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.630 passed 00:10:32.630 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 16:03:18.628232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.630 [2024-07-15 16:03:18.629563] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:32.630 [2024-07-15 16:03:18.631264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.890 passed 00:10:32.890 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 16:03:18.714431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.890 [2024-07-15 16:03:18.789981] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:32.890 [2024-07-15 16:03:18.813968] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:32.890 [2024-07-15 16:03:18.819091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.890 passed 00:10:33.149 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 16:03:18.905287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.149 [2024-07-15 16:03:18.906619] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:33.149 [2024-07-15 16:03:18.906675] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:33.149 [2024-07-15 16:03:18.908321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.149 passed 00:10:33.149 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 16:03:18.990493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.149 [2024-07-15 16:03:19.081967] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:33.149 [2024-07-15 16:03:19.089965] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:33.149 [2024-07-15 16:03:19.097968] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:33.149 [2024-07-15 16:03:19.105969] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:33.149 [2024-07-15 16:03:19.135081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.407 passed 00:10:33.407 Test: admin_create_io_sq_verify_pc ...[2024-07-15 16:03:19.220141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.407 [2024-07-15 16:03:19.233979] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:33.407 [2024-07-15 16:03:19.251514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.407 passed 00:10:33.407 Test: admin_create_io_qp_max_qps ...[2024-07-15 16:03:19.338095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:34.813 [2024-07-15 16:03:20.450972] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:35.073 [2024-07-15 16:03:20.839424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.073 passed 00:10:35.073 Test: admin_create_io_sq_shared_cq ...[2024-07-15 16:03:20.925734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.073 [2024-07-15 16:03:21.056967] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:35.333 [2024-07-15 16:03:21.094055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.333 passed 00:10:35.333 00:10:35.333 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.333 suites 1 1 n/a 0 0 00:10:35.333 tests 18 18 18 0 0 00:10:35.333 asserts 360 360 360 0 n/a 00:10:35.333 00:10:35.333 Elapsed time = 1.574 seconds 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 733688 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 733688 ']' 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 733688 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 733688 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 733688' 00:10:35.333 killing process with pid 733688 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 733688 00:10:35.333 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 733688 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:35.590 00:10:35.590 real 0m5.809s 00:10:35.590 user 0m16.245s 00:10:35.590 sys 0m0.566s 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 ************************************ 00:10:35.590 END TEST nvmf_vfio_user_nvme_compliance 00:10:35.590 ************************************ 00:10:35.590 16:03:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:35.590 16:03:21 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:35.590 16:03:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.590 16:03:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.590 16:03:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 ************************************ 00:10:35.590 START TEST nvmf_vfio_user_fuzz 00:10:35.590 ************************************ 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:35.590 * Looking for test storage... 00:10:35.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.590 16:03:21 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.590 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.591 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=734413 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 734413' 00:10:35.848 Process pid: 734413 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 734413 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 734413 ']' 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
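(The fuzz pass that follows builds the same vfio-user subsystem as the compliance test above, a VFIOUSER transport with a malloc0 namespace listening at /var/run/vfio-user, but starts the target single-core and then drives the endpoint with the generic NVMe fuzzer for 30 seconds. A rough reconstruction from the vfio_user_fuzz.sh trace; the seed and timeout are this run's values, not defaults.)

# target: build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
# fuzzer, pinned to core 1 (-m 0x2), 30 s, fixed seed:
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a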
00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.848 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.106 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:36.106 16:03:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 malloc0 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:37.042 16:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:09.099 Fuzzing completed. 
Shutting down the fuzz application 00:11:09.099 00:11:09.099 Dumping successful admin opcodes: 00:11:09.099 8, 9, 10, 24, 00:11:09.099 Dumping successful io opcodes: 00:11:09.099 0, 00:11:09.099 NS: 0x200003a1ef00 I/O qp, Total commands completed: 629746, total successful commands: 2441, random_seed: 2761640640 00:11:09.099 NS: 0x200003a1ef00 admin qp, Total commands completed: 80106, total successful commands: 631, random_seed: 3759627712 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 734413 ']' 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 734413' 00:11:09.099 killing process with pid 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 734413 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:09.099 00:11:09.099 real 0m32.279s 00:11:09.099 user 0m30.384s 00:11:09.099 sys 0m29.486s 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.099 16:03:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.099 ************************************ 00:11:09.099 END TEST nvmf_vfio_user_fuzz 00:11:09.099 ************************************ 00:11:09.099 16:03:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.099 16:03:53 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.099 16:03:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.099 16:03:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.099 16:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.099 ************************************ 00:11:09.099 START 
TEST nvmf_host_management 00:11:09.099 ************************************ 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.100 * Looking for test storage... 00:11:09.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.100 16:03:53 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.100 16:03:53 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.100 16:03:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.030 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:10.031 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:10.031 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:10.031 Found net devices under 0000:09:00.0: cvl_0_0 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:10.031 Found net devices under 0000:09:00.1: cvl_0_1 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.031 16:03:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.031 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.031 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.031 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:11:10.290 00:11:10.290 --- 10.0.0.2 ping statistics --- 00:11:10.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.290 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:10.290 00:11:10.290 --- 10.0.0.1 ping statistics --- 00:11:10.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.290 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=739869 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 739869 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 739869 ']' 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:10.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.290 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.290 [2024-07-15 16:03:56.209658] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:10.290 [2024-07-15 16:03:56.209750] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.290 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.290 [2024-07-15 16:03:56.275071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.548 [2024-07-15 16:03:56.387952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.548 [2024-07-15 16:03:56.388011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.548 [2024-07-15 16:03:56.388025] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.548 [2024-07-15 16:03:56.388036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.548 [2024-07-15 16:03:56.388046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.548 [2024-07-15 16:03:56.388131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.548 [2024-07-15 16:03:56.388197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.548 [2024-07-15 16:03:56.388246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.548 [2024-07-15 16:03:56.388249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.548 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.805 [2024-07-15 16:03:56.551699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.805 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.805 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:10.805 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.805 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.805 16:03:56 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.806 Malloc0 00:11:10.806 [2024-07-15 16:03:56.612539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=740032 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 740032 /var/tmp/bdevperf.sock 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 740032 ']' 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:10.806 { 00:11:10.806 "params": { 00:11:10.806 "name": "Nvme$subsystem", 00:11:10.806 "trtype": "$TEST_TRANSPORT", 00:11:10.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.806 "adrfam": "ipv4", 00:11:10.806 "trsvcid": "$NVMF_PORT", 00:11:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.806 "hdgst": ${hdgst:-false}, 00:11:10.806 "ddgst": ${ddgst:-false} 00:11:10.806 }, 00:11:10.806 "method": "bdev_nvme_attach_controller" 00:11:10.806 } 00:11:10.806 EOF 00:11:10.806 )") 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:10.806 16:03:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:10.806 "params": { 00:11:10.806 "name": "Nvme0", 00:11:10.806 "trtype": "tcp", 00:11:10.806 "traddr": "10.0.0.2", 00:11:10.806 "adrfam": "ipv4", 00:11:10.806 "trsvcid": "4420", 00:11:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:10.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:10.806 "hdgst": false, 00:11:10.806 "ddgst": false 00:11:10.806 }, 00:11:10.806 "method": "bdev_nvme_attach_controller" 00:11:10.806 }' 00:11:10.806 [2024-07-15 16:03:56.690661] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:10.806 [2024-07-15 16:03:56.690746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740032 ] 00:11:10.806 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.806 [2024-07-15 16:03:56.757866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.063 [2024-07-15 16:03:56.867524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.321 Running I/O for 10 seconds... 
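The invocation above follows the harness pattern for driving I/O against the target just created: gen_nvmf_target_json (defined in nvmf/common.sh) prints a bdev config pointing at 10.0.0.2:4420, and the script feeds it to bdevperf over an anonymous pipe, which is why --json receives /dev/fd/63 rather than a file on disk. A minimal sketch of the same wiring, with $SPDK_DIR standing in for the workspace checkout and error handling left to the harness:

    # Sketch only: run bdevperf against a config generated on the fly.
    # gen_nvmf_target_json and waitforlisten are harness helpers, not
    # reimplemented here; the queue depth, I/O size and runtime match the trace.
    "$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock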
00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.889 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.889 [2024-07-15 16:03:57.701888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.701981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.701999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 
00:11:11.889 [2024-07-15 16:03:57.702022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is 
same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9380 is same with the state(5) to be set 00:11:11.889 [2024-07-15 16:03:57.702885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.889 [2024-07-15 16:03:57.702927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.702969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.702989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:11:11.890 [2024-07-15 16:03:57.703396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:11.890 [2024-07-15 16:03:57.703748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.703981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.703998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 
[2024-07-15 16:03:57.704114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.890 [2024-07-15 16:03:57.704406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.890 [2024-07-15 16:03:57.704422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 
16:03:57.704455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.891 [2024-07-15 16:03:57.704540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:11.891 [2024-07-15 16:03:57.704689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.891 [2024-07-15 16:03:57.704833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.891 [2024-07-15 16:03:57.704931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.704982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.704999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.705015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.705033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.705048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.705066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.705081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.705099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.705115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.705132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:11.891 [2024-07-15 16:03:57.705148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.891 [2024-07-15 16:03:57.705164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38900 is same with the state(5) to be set 00:11:11.891 [2024-07-15 16:03:57.705252] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf38900 was disconnected and freed. reset controller. 00:11:11.891 [2024-07-15 16:03:57.706438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:11.891 task offset: 98304 on job bdev=Nvme0n1 fails 00:11:11.891 00:11:11.891 Latency(us) 00:11:11.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.891 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:11.891 Job: Nvme0n1 ended in about 0.49 seconds with error 00:11:11.891 Verification LBA range: start 0x0 length 0x400 00:11:11.891 Nvme0n1 : 0.49 1580.19 98.76 131.68 0.00 36450.74 5922.51 33399.09 00:11:11.891 =================================================================================================================== 00:11:11.891 Total : 1580.19 98.76 131.68 0.00 36450.74 5922.51 33399.09 00:11:11.891 [2024-07-15 16:03:57.708586] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:11.891 [2024-07-15 16:03:57.708618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb27790 (9): Bad file descriptor 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.891 16:03:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:11.891 [2024-07-15 16:03:57.759364] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
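This failed run is the point of the test: while bdevperf has 64 reads in flight, the script revokes the initiator's access, so every queued command completes as ABORTED - SQ DELETION, the qpair is torn down, and the controller reset only goes through after the host has been added back (nvmf_subsystem_add_host appears in the trace just before "Resetting controller successful"). Reduced to the two RPC calls visible above, with scripts/rpc.py used in place of the harness's rpc_cmd wrapper and the sleep purely illustrative:

    # Sketch of the host-management sequence exercised while I/O is running.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # in-flight reads now fail with ABORTED - SQ DELETION
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0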
00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 740032 00:11:12.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (740032) - No such process 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:12.827 { 00:11:12.827 "params": { 00:11:12.827 "name": "Nvme$subsystem", 00:11:12.827 "trtype": "$TEST_TRANSPORT", 00:11:12.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.827 "adrfam": "ipv4", 00:11:12.827 "trsvcid": "$NVMF_PORT", 00:11:12.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.827 "hdgst": ${hdgst:-false}, 00:11:12.827 "ddgst": ${ddgst:-false} 00:11:12.827 }, 00:11:12.827 "method": "bdev_nvme_attach_controller" 00:11:12.827 } 00:11:12.827 EOF 00:11:12.827 )") 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:12.827 16:03:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:12.827 "params": { 00:11:12.827 "name": "Nvme0", 00:11:12.827 "trtype": "tcp", 00:11:12.827 "traddr": "10.0.0.2", 00:11:12.827 "adrfam": "ipv4", 00:11:12.827 "trsvcid": "4420", 00:11:12.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:12.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:12.827 "hdgst": false, 00:11:12.827 "ddgst": false 00:11:12.827 }, 00:11:12.827 "method": "bdev_nvme_attach_controller" 00:11:12.827 }' 00:11:12.827 [2024-07-15 16:03:58.762430] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:12.827 [2024-07-15 16:03:58.762504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740264 ] 00:11:12.827 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.827 [2024-07-15 16:03:58.823360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.087 [2024-07-15 16:03:58.935621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.345 Running I/O for 1 seconds... 
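The "No such process" above is harmless: the first bdevperf already exited on its error path, so the kill is tolerated with || true and the script clears the per-core lock files before launching the second, one-second verify run (the "Failed to unlink lock fd for core 1, errno: 2" notice at target shutdown further down is most likely the target finding those files already gone). As a sketch, with perfpid illustrative and the lock-file names as listed in the trace:

    # Sketch: tolerate an already-dead perf process and clear stale core locks.
    kill -9 "$perfpid" || true
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004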
00:11:14.281 00:11:14.281 Latency(us) 00:11:14.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.281 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:14.281 Verification LBA range: start 0x0 length 0x400 00:11:14.281 Nvme0n1 : 1.01 1707.35 106.71 0.00 0.00 36875.30 5364.24 32816.55 00:11:14.281 =================================================================================================================== 00:11:14.281 Total : 1707.35 106.71 0.00 0.00 36875.30 5364.24 32816.55 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.540 rmmod nvme_tcp 00:11:14.540 rmmod nvme_fabrics 00:11:14.540 rmmod nvme_keyring 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 739869 ']' 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 739869 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 739869 ']' 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 739869 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 739869 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 739869' 00:11:14.540 killing process with pid 739869 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 739869 00:11:14.540 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 739869 00:11:14.798 [2024-07-15 16:04:00.785838] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.058 16:04:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.991 16:04:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.991 16:04:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:16.991 00:11:16.991 real 0m9.006s 00:11:16.991 user 0m21.006s 00:11:16.991 sys 0m2.754s 00:11:16.991 16:04:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.991 16:04:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 ************************************ 00:11:16.991 END TEST nvmf_host_management 00:11:16.991 ************************************ 00:11:16.991 16:04:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:16.991 16:04:02 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:16.991 16:04:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:16.991 16:04:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.991 16:04:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.991 ************************************ 00:11:16.991 START TEST nvmf_lvol 00:11:16.991 ************************************ 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:16.991 * Looking for test storage... 
00:11:16.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.991 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.992 16:04:02 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.992 16:04:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:19.538 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:19.538 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:19.538 Found net devices under 0000:09:00.0: cvl_0_0 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:19.538 Found net devices under 0000:09:00.1: cvl_0_1 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.538 
16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.538 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:19.539 00:11:19.539 --- 10.0.0.2 ping statistics --- 00:11:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.539 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:11:19.539 00:11:19.539 --- 10.0.0.1 ping statistics --- 00:11:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.539 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=742395 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 742395 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 742395 ']' 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.539 [2024-07-15 16:04:05.247735] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:19.539 [2024-07-15 16:04:05.247823] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.539 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.539 [2024-07-15 16:04:05.311696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.539 [2024-07-15 16:04:05.412170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.539 [2024-07-15 16:04:05.412229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.539 [2024-07-15 16:04:05.412253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.539 [2024-07-15 16:04:05.412264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.539 [2024-07-15 16:04:05.412273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.539 [2024-07-15 16:04:05.412361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.539 [2024-07-15 16:04:05.412421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.539 [2024-07-15 16:04:05.412424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.539 16:04:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:04:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.796 16:04:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.796 [2024-07-15 16:04:05.771335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.796 16:04:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.360 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:20.360 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.617 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:20.617 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:20.875 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:21.133 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=80cb9f62-df7c-48ae-aec6-a3674dd78509 00:11:21.133 16:04:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 80cb9f62-df7c-48ae-aec6-a3674dd78509 lvol 20 00:11:21.390 16:04:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e7ad671d-29ac-4d2e-901b-153fdfffa867 00:11:21.390 16:04:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:21.736 16:04:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7ad671d-29ac-4d2e-901b-153fdfffa867 00:11:21.736 16:04:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
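Condensed from the trace above, the target-side setup for nvmf_lvol amounts to the following rpc.py sequence. This is a sketch only: $rpc_py stands for the scripts/rpc.py path used throughout the trace, and $LVS / $LVOL stand in for the lvstore and lvol UUIDs the log reports (80cb9f62-... and e7ad671d-...).

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options exactly as traced
  $rpc_py bdev_malloc_create 64 512                                    # Malloc0: 64 MiB, 512 B blocks
  $rpc_py bdev_malloc_create 64 512                                    # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe both into raid0
  LVS=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)                    # lvstore on top of the raid bdev
  LVOL=$($rpc_py bdev_lvol_create -u "$LVS" lvol 20)                   # 20 MiB lvol (LVOL_BDEV_INIT_SIZE)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 # subsystem, any host allowed
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"     # export the lvol as namespace 1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420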
00:11:21.992 [2024-07-15 16:04:07.902582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.992 16:04:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.249 16:04:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=742817 00:11:22.249 16:04:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:22.249 16:04:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:22.249 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.179 16:04:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e7ad671d-29ac-4d2e-901b-153fdfffa867 MY_SNAPSHOT 00:11:23.746 16:04:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b1ebb9a3-e8ad-4b37-8c8e-c3c340938796 00:11:23.746 16:04:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e7ad671d-29ac-4d2e-901b-153fdfffa867 30 00:11:24.004 16:04:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b1ebb9a3-e8ad-4b37-8c8e-c3c340938796 MY_CLONE 00:11:24.261 16:04:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=797cdaaa-304d-4c11-b52c-e1691108c465 00:11:24.261 16:04:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 797cdaaa-304d-4c11-b52c-e1691108c465 00:11:24.826 16:04:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 742817 00:11:32.942 Initializing NVMe Controllers 00:11:32.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:32.942 Controller IO queue size 128, less than required. 00:11:32.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:32.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:32.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:32.942 Initialization complete. Launching workers. 
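While the 10 s randwrite run launched above is in flight against nqn.2016-06.io.spdk:cnode0, the test mutates the exported lvol. Stripped of paths, the traced sequence is (again a sketch; $SNAP and $CLONE stand in for the UUIDs in the log):

  SNAP=$($rpc_py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)   # read-only snapshot of the live lvol
  $rpc_py bdev_lvol_resize "$LVOL" 30                      # grow it 20 MiB -> 30 MiB under I/O (LVOL_BDEV_FINAL_SIZE)
  CLONE=$($rpc_py bdev_lvol_clone "$SNAP" MY_CLONE)        # thin clone of the snapshot
  $rpc_py bdev_lvol_inflate "$CLONE"                       # fully allocate the clone, detaching it from its snapshot
  wait $perf_pid                                           # then let spdk_nvme_perf finish its 10 s run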
00:11:32.942 ======================================================== 00:11:32.942 Latency(us) 00:11:32.942 Device Information : IOPS MiB/s Average min max 00:11:32.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10736.50 41.94 11922.83 1329.66 73196.32 00:11:32.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10636.50 41.55 12042.41 1862.12 67243.18 00:11:32.943 ======================================================== 00:11:32.943 Total : 21373.00 83.49 11982.34 1329.66 73196.32 00:11:32.943 00:11:32.943 16:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:32.943 16:04:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7ad671d-29ac-4d2e-901b-153fdfffa867 00:11:33.200 16:04:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80cb9f62-df7c-48ae-aec6-a3674dd78509 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.766 rmmod nvme_tcp 00:11:33.766 rmmod nvme_fabrics 00:11:33.766 rmmod nvme_keyring 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 742395 ']' 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 742395 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 742395 ']' 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 742395 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.766 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742395 00:11:33.767 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.767 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.767 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742395' 00:11:33.767 killing process with pid 742395 00:11:33.767 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 742395 00:11:33.767 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 742395 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.026 16:04:19 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.026 16:04:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.560 16:04:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.560 00:11:36.560 real 0m19.045s 00:11:36.560 user 1m4.696s 00:11:36.560 sys 0m5.631s 00:11:36.560 16:04:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.560 16:04:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:36.560 ************************************ 00:11:36.560 END TEST nvmf_lvol 00:11:36.560 ************************************ 00:11:36.560 16:04:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.560 16:04:21 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.560 16:04:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.560 16:04:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.560 16:04:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.560 ************************************ 00:11:36.560 START TEST nvmf_lvs_grow 00:11:36.560 ************************************ 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.560 * Looking for test storage... 
00:11:36.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.560 16:04:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:38.461 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:38.461 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.461 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:38.462 Found net devices under 0000:09:00.0: cvl_0_0 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:38.462 Found net devices under 0000:09:00.1: cvl_0_1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:38.462 00:11:38.462 --- 10.0.0.2 ping statistics --- 00:11:38.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.462 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:38.462 00:11:38.462 --- 10.0.0.1 ping statistics --- 00:11:38.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.462 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=746078 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 746078 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 746078 ']' 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.462 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.462 [2024-07-15 16:04:24.327354] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:38.462 [2024-07-15 16:04:24.327423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.462 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.462 [2024-07-15 16:04:24.387434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.719 [2024-07-15 16:04:24.488286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.719 [2024-07-15 16:04:24.488347] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:38.719 [2024-07-15 16:04:24.488360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.719 [2024-07-15 16:04:24.488371] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.720 [2024-07-15 16:04:24.488388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.720 [2024-07-15 16:04:24.488421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.720 16:04:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.977 [2024-07-15 16:04:24.846491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.977 ************************************ 00:11:38.977 START TEST lvs_grow_clean 00:11:38.977 ************************************ 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:38.977 16:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:39.236 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:39.236 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:39.495 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:39.495 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:39.495 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:39.753 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:39.753 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:39.753 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aabffbbe-1c63-4561-86cc-83d9d05f2533 lvol 150 00:11:40.012 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c531b2f6-3cd2-424c-b451-43362ddcc315 00:11:40.012 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:40.012 16:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:40.290 [2024-07-15 16:04:26.117083] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:40.290 [2024-07-15 16:04:26.117175] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:40.290 true 00:11:40.290 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:40.290 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:40.547 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:40.547 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:40.806 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c531b2f6-3cd2-424c-b451-43362ddcc315 00:11:41.063 16:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:41.325 [2024-07-15 16:04:27.180303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.325 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=746516 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 746516 /var/tmp/bdevperf.sock 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 746516 ']' 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.610 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.611 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.611 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:41.611 [2024-07-15 16:04:27.492579] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
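For nvmf_lvs_grow the initiator side is bdevperf rather than spdk_nvme_perf. Condensing this part of the trace (and the attach/perform_tests calls that follow just below), the driving pattern is roughly:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bdevperf.sock
  $bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # -z: stay idle until told to run
  # attach the NVMe/TCP target once the RPC socket is up; the namespace shows up as bdev Nvme0n1
  $rpc_py -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # then start the configured randwrite workload over that bdev
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests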
00:11:41.611 [2024-07-15 16:04:27.492665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746516 ] 00:11:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.611 [2024-07-15 16:04:27.552447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.868 [2024-07-15 16:04:27.662210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.868 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.868 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:41.868 16:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:42.434 Nvme0n1 00:11:42.434 16:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:42.694 [ 00:11:42.694 { 00:11:42.694 "name": "Nvme0n1", 00:11:42.694 "aliases": [ 00:11:42.694 "c531b2f6-3cd2-424c-b451-43362ddcc315" 00:11:42.694 ], 00:11:42.694 "product_name": "NVMe disk", 00:11:42.694 "block_size": 4096, 00:11:42.694 "num_blocks": 38912, 00:11:42.694 "uuid": "c531b2f6-3cd2-424c-b451-43362ddcc315", 00:11:42.694 "assigned_rate_limits": { 00:11:42.694 "rw_ios_per_sec": 0, 00:11:42.694 "rw_mbytes_per_sec": 0, 00:11:42.694 "r_mbytes_per_sec": 0, 00:11:42.694 "w_mbytes_per_sec": 0 00:11:42.694 }, 00:11:42.694 "claimed": false, 00:11:42.694 "zoned": false, 00:11:42.694 "supported_io_types": { 00:11:42.694 "read": true, 00:11:42.694 "write": true, 00:11:42.694 "unmap": true, 00:11:42.694 "flush": true, 00:11:42.694 "reset": true, 00:11:42.694 "nvme_admin": true, 00:11:42.694 "nvme_io": true, 00:11:42.694 "nvme_io_md": false, 00:11:42.694 "write_zeroes": true, 00:11:42.694 "zcopy": false, 00:11:42.694 "get_zone_info": false, 00:11:42.694 "zone_management": false, 00:11:42.694 "zone_append": false, 00:11:42.694 "compare": true, 00:11:42.694 "compare_and_write": true, 00:11:42.694 "abort": true, 00:11:42.694 "seek_hole": false, 00:11:42.694 "seek_data": false, 00:11:42.694 "copy": true, 00:11:42.694 "nvme_iov_md": false 00:11:42.694 }, 00:11:42.694 "memory_domains": [ 00:11:42.694 { 00:11:42.694 "dma_device_id": "system", 00:11:42.694 "dma_device_type": 1 00:11:42.694 } 00:11:42.694 ], 00:11:42.694 "driver_specific": { 00:11:42.694 "nvme": [ 00:11:42.694 { 00:11:42.694 "trid": { 00:11:42.694 "trtype": "TCP", 00:11:42.694 "adrfam": "IPv4", 00:11:42.694 "traddr": "10.0.0.2", 00:11:42.694 "trsvcid": "4420", 00:11:42.694 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:42.694 }, 00:11:42.694 "ctrlr_data": { 00:11:42.694 "cntlid": 1, 00:11:42.694 "vendor_id": "0x8086", 00:11:42.694 "model_number": "SPDK bdev Controller", 00:11:42.694 "serial_number": "SPDK0", 00:11:42.694 "firmware_revision": "24.09", 00:11:42.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:42.694 "oacs": { 00:11:42.694 "security": 0, 00:11:42.694 "format": 0, 00:11:42.694 "firmware": 0, 00:11:42.694 "ns_manage": 0 00:11:42.694 }, 00:11:42.694 "multi_ctrlr": true, 00:11:42.694 "ana_reporting": false 00:11:42.694 }, 
00:11:42.694 "vs": { 00:11:42.694 "nvme_version": "1.3" 00:11:42.694 }, 00:11:42.694 "ns_data": { 00:11:42.694 "id": 1, 00:11:42.694 "can_share": true 00:11:42.694 } 00:11:42.694 } 00:11:42.694 ], 00:11:42.694 "mp_policy": "active_passive" 00:11:42.694 } 00:11:42.694 } 00:11:42.694 ] 00:11:42.694 16:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=746650 00:11:42.694 16:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:42.694 16:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:42.694 Running I/O for 10 seconds... 00:11:44.069 Latency(us) 00:11:44.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.069 Nvme0n1 : 1.00 15433.00 60.29 0.00 0.00 0.00 0.00 0.00 00:11:44.069 =================================================================================================================== 00:11:44.069 Total : 15433.00 60.29 0.00 0.00 0.00 0.00 0.00 00:11:44.069 00:11:44.634 16:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:44.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.891 Nvme0n1 : 2.00 15559.00 60.78 0.00 0.00 0.00 0.00 0.00 00:11:44.891 =================================================================================================================== 00:11:44.891 Total : 15559.00 60.78 0.00 0.00 0.00 0.00 0.00 00:11:44.891 00:11:44.891 true 00:11:44.891 16:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:44.891 16:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:45.150 16:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:45.150 16:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:45.150 16:04:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 746650 00:11:45.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.719 Nvme0n1 : 3.00 15664.67 61.19 0.00 0.00 0.00 0.00 0.00 00:11:45.719 =================================================================================================================== 00:11:45.719 Total : 15664.67 61.19 0.00 0.00 0.00 0.00 0.00 00:11:45.719 00:11:47.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.096 Nvme0n1 : 4.00 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:11:47.096 =================================================================================================================== 00:11:47.096 Total : 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:11:47.096 00:11:48.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.034 Nvme0n1 : 5.00 15825.00 61.82 0.00 0.00 0.00 0.00 0.00 00:11:48.034 =================================================================================================================== 00:11:48.034 
Total : 15825.00 61.82 0.00 0.00 0.00 0.00 0.00 00:11:48.034 00:11:48.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.972 Nvme0n1 : 6.00 15876.17 62.02 0.00 0.00 0.00 0.00 0.00 00:11:48.972 =================================================================================================================== 00:11:48.972 Total : 15876.17 62.02 0.00 0.00 0.00 0.00 0.00 00:11:48.972 00:11:49.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.912 Nvme0n1 : 7.00 15921.57 62.19 0.00 0.00 0.00 0.00 0.00 00:11:49.912 =================================================================================================================== 00:11:49.912 Total : 15921.57 62.19 0.00 0.00 0.00 0.00 0.00 00:11:49.912 00:11:50.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.846 Nvme0n1 : 8.00 15955.50 62.33 0.00 0.00 0.00 0.00 0.00 00:11:50.846 =================================================================================================================== 00:11:50.846 Total : 15955.50 62.33 0.00 0.00 0.00 0.00 0.00 00:11:50.846 00:11:51.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.784 Nvme0n1 : 9.00 15992.78 62.47 0.00 0.00 0.00 0.00 0.00 00:11:51.784 =================================================================================================================== 00:11:51.784 Total : 15992.78 62.47 0.00 0.00 0.00 0.00 0.00 00:11:51.784 00:11:52.719 00:11:52.719 Latency(us) 00:11:52.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.719 Nvme0n1 : 10.00 16015.81 62.56 0.00 0.00 7987.48 3252.53 15243.19 00:11:52.719 =================================================================================================================== 00:11:52.719 Total : 16015.81 62.56 0.00 0.00 7987.48 3252.53 15243.19 00:11:52.719 0 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 746516 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 746516 ']' 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 746516 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.719 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746516 00:11:52.976 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:52.976 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:52.976 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746516' 00:11:52.976 killing process with pid 746516 00:11:52.976 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 746516 00:11:52.976 Received shutdown signal, test time was about 10.000000 seconds 00:11:52.976 00:11:52.976 Latency(us) 00:11:52.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.976 
=================================================================================================================== 00:11:52.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:52.976 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 746516 00:11:53.233 16:04:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.490 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:53.747 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:53.747 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:54.005 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:54.005 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:54.005 16:04:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:54.263 [2024-07-15 16:04:40.097779] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.263 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:54.520 request: 00:11:54.520 { 00:11:54.520 "uuid": "aabffbbe-1c63-4561-86cc-83d9d05f2533", 00:11:54.520 "method": "bdev_lvol_get_lvstores", 00:11:54.520 "req_id": 1 00:11:54.520 } 00:11:54.520 Got JSON-RPC error response 00:11:54.520 response: 00:11:54.520 { 00:11:54.520 "code": -19, 00:11:54.520 "message": "No such device" 00:11:54.520 } 00:11:54.520 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:54.520 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.520 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.520 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.520 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.778 aio_bdev 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c531b2f6-3cd2-424c-b451-43362ddcc315 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c531b2f6-3cd2-424c-b451-43362ddcc315 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.778 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:55.035 16:04:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c531b2f6-3cd2-424c-b451-43362ddcc315 -t 2000 00:11:55.292 [ 00:11:55.292 { 00:11:55.292 "name": "c531b2f6-3cd2-424c-b451-43362ddcc315", 00:11:55.292 "aliases": [ 00:11:55.292 "lvs/lvol" 00:11:55.293 ], 00:11:55.293 "product_name": "Logical Volume", 00:11:55.293 "block_size": 4096, 00:11:55.293 "num_blocks": 38912, 00:11:55.293 "uuid": "c531b2f6-3cd2-424c-b451-43362ddcc315", 00:11:55.293 "assigned_rate_limits": { 00:11:55.293 "rw_ios_per_sec": 0, 00:11:55.293 "rw_mbytes_per_sec": 0, 00:11:55.293 "r_mbytes_per_sec": 0, 00:11:55.293 "w_mbytes_per_sec": 0 00:11:55.293 }, 00:11:55.293 "claimed": false, 00:11:55.293 "zoned": false, 00:11:55.293 "supported_io_types": { 00:11:55.293 "read": true, 00:11:55.293 "write": true, 00:11:55.293 "unmap": true, 00:11:55.293 "flush": false, 00:11:55.293 "reset": true, 00:11:55.293 "nvme_admin": false, 00:11:55.293 "nvme_io": false, 00:11:55.293 "nvme_io_md": false, 00:11:55.293 "write_zeroes": true, 00:11:55.293 "zcopy": false, 00:11:55.293 "get_zone_info": false, 00:11:55.293 "zone_management": false, 00:11:55.293 "zone_append": false, 00:11:55.293 "compare": false, 00:11:55.293 "compare_and_write": false, 00:11:55.293 "abort": false, 00:11:55.293 "seek_hole": true, 00:11:55.293 
"seek_data": true, 00:11:55.293 "copy": false, 00:11:55.293 "nvme_iov_md": false 00:11:55.293 }, 00:11:55.293 "driver_specific": { 00:11:55.293 "lvol": { 00:11:55.293 "lvol_store_uuid": "aabffbbe-1c63-4561-86cc-83d9d05f2533", 00:11:55.293 "base_bdev": "aio_bdev", 00:11:55.293 "thin_provision": false, 00:11:55.293 "num_allocated_clusters": 38, 00:11:55.293 "snapshot": false, 00:11:55.293 "clone": false, 00:11:55.293 "esnap_clone": false 00:11:55.293 } 00:11:55.293 } 00:11:55.293 } 00:11:55.293 ] 00:11:55.293 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:55.293 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:55.293 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:55.552 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:55.552 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:55.552 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:55.810 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:55.810 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c531b2f6-3cd2-424c-b451-43362ddcc315 00:11:56.069 16:04:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aabffbbe-1c63-4561-86cc-83d9d05f2533 00:11:56.327 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.586 00:11:56.586 real 0m17.648s 00:11:56.586 user 0m17.127s 00:11:56.586 sys 0m1.879s 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:56.586 ************************************ 00:11:56.586 END TEST lvs_grow_clean 00:11:56.586 ************************************ 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.586 ************************************ 00:11:56.586 START TEST lvs_grow_dirty 00:11:56.586 ************************************ 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.586 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.844 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:57.102 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:57.102 16:04:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:57.360 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7923d084-fb28-43a6-9587-159c86b4d224 00:11:57.360 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:11:57.360 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:57.619 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:57.619 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:57.619 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7923d084-fb28-43a6-9587-159c86b4d224 lvol 150 00:11:57.879 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=49aba176-ec7f-4815-b08d-add0f1f3cabd 00:11:57.879 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.879 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:58.141 [2024-07-15 16:04:43.906112] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:58.141 [2024-07-15 16:04:43.906192] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:11:58.141 true 00:11:58.141 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:11:58.141 16:04:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:58.424 16:04:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:58.424 16:04:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:58.688 16:04:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49aba176-ec7f-4815-b08d-add0f1f3cabd 00:11:58.688 16:04:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:58.948 [2024-07-15 16:04:44.933252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.206 16:04:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=748692 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 748692 /var/tmp/bdevperf.sock 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 748692 ']' 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:59.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.464 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:59.464 [2024-07-15 16:04:45.284658] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:11:59.464 [2024-07-15 16:04:45.284731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748692 ] 00:11:59.464 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.464 [2024-07-15 16:04:45.341108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.464 [2024-07-15 16:04:45.447552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.722 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.722 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:59.722 16:04:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:00.289 Nvme0n1 00:12:00.289 16:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:00.289 [ 00:12:00.289 { 00:12:00.289 "name": "Nvme0n1", 00:12:00.289 "aliases": [ 00:12:00.289 "49aba176-ec7f-4815-b08d-add0f1f3cabd" 00:12:00.289 ], 00:12:00.289 "product_name": "NVMe disk", 00:12:00.289 "block_size": 4096, 00:12:00.289 "num_blocks": 38912, 00:12:00.289 "uuid": "49aba176-ec7f-4815-b08d-add0f1f3cabd", 00:12:00.289 "assigned_rate_limits": { 00:12:00.289 "rw_ios_per_sec": 0, 00:12:00.289 "rw_mbytes_per_sec": 0, 00:12:00.289 "r_mbytes_per_sec": 0, 00:12:00.289 "w_mbytes_per_sec": 0 00:12:00.289 }, 00:12:00.289 "claimed": false, 00:12:00.289 "zoned": false, 00:12:00.289 "supported_io_types": { 00:12:00.289 "read": true, 00:12:00.289 "write": true, 00:12:00.289 "unmap": true, 00:12:00.289 "flush": true, 00:12:00.289 "reset": true, 00:12:00.289 "nvme_admin": true, 00:12:00.289 "nvme_io": true, 00:12:00.289 "nvme_io_md": false, 00:12:00.289 "write_zeroes": true, 00:12:00.289 "zcopy": false, 00:12:00.289 "get_zone_info": false, 00:12:00.289 "zone_management": false, 00:12:00.289 "zone_append": false, 00:12:00.289 "compare": true, 00:12:00.289 "compare_and_write": true, 00:12:00.289 "abort": true, 00:12:00.289 "seek_hole": false, 00:12:00.289 "seek_data": false, 00:12:00.289 "copy": true, 00:12:00.289 "nvme_iov_md": false 00:12:00.289 }, 00:12:00.289 "memory_domains": [ 00:12:00.289 { 00:12:00.289 "dma_device_id": "system", 00:12:00.289 "dma_device_type": 1 00:12:00.289 } 00:12:00.289 ], 00:12:00.289 "driver_specific": { 00:12:00.289 "nvme": [ 00:12:00.289 { 00:12:00.289 "trid": { 00:12:00.289 "trtype": "TCP", 00:12:00.289 "adrfam": "IPv4", 00:12:00.289 "traddr": "10.0.0.2", 00:12:00.289 "trsvcid": "4420", 00:12:00.289 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:00.289 }, 00:12:00.289 "ctrlr_data": { 00:12:00.289 "cntlid": 1, 00:12:00.289 "vendor_id": "0x8086", 00:12:00.289 "model_number": "SPDK bdev Controller", 00:12:00.289 "serial_number": "SPDK0", 00:12:00.289 "firmware_revision": "24.09", 00:12:00.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:00.289 "oacs": { 00:12:00.289 "security": 0, 00:12:00.289 "format": 0, 00:12:00.289 "firmware": 0, 00:12:00.289 "ns_manage": 0 00:12:00.289 }, 00:12:00.289 "multi_ctrlr": true, 00:12:00.289 "ana_reporting": false 00:12:00.289 }, 
00:12:00.289 "vs": { 00:12:00.289 "nvme_version": "1.3" 00:12:00.289 }, 00:12:00.289 "ns_data": { 00:12:00.289 "id": 1, 00:12:00.289 "can_share": true 00:12:00.289 } 00:12:00.289 } 00:12:00.289 ], 00:12:00.289 "mp_policy": "active_passive" 00:12:00.289 } 00:12:00.289 } 00:12:00.289 ] 00:12:00.289 16:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=748811 00:12:00.289 16:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:00.289 16:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:00.548 Running I/O for 10 seconds... 00:12:01.486 Latency(us) 00:12:01.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.486 Nvme0n1 : 1.00 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:12:01.486 =================================================================================================================== 00:12:01.486 Total : 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:12:01.486 00:12:02.423 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:02.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.423 Nvme0n1 : 2.00 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:12:02.423 =================================================================================================================== 00:12:02.423 Total : 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:12:02.423 00:12:02.681 true 00:12:02.681 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:02.681 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:02.938 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:02.938 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:02.938 16:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 748811 00:12:03.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.506 Nvme0n1 : 3.00 15770.00 61.60 0.00 0.00 0.00 0.00 0.00 00:12:03.506 =================================================================================================================== 00:12:03.506 Total : 15770.00 61.60 0.00 0.00 0.00 0.00 0.00 00:12:03.506 00:12:04.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.446 Nvme0n1 : 4.00 15796.25 61.70 0.00 0.00 0.00 0.00 0.00 00:12:04.446 =================================================================================================================== 00:12:04.446 Total : 15796.25 61.70 0.00 0.00 0.00 0.00 0.00 00:12:04.446 00:12:05.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.385 Nvme0n1 : 5.00 15837.40 61.86 0.00 0.00 0.00 0.00 0.00 00:12:05.385 =================================================================================================================== 00:12:05.385 
Total : 15837.40 61.86 0.00 0.00 0.00 0.00 0.00 00:12:05.385 00:12:06.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.763 Nvme0n1 : 6.00 15864.83 61.97 0.00 0.00 0.00 0.00 0.00 00:12:06.763 =================================================================================================================== 00:12:06.763 Total : 15864.83 61.97 0.00 0.00 0.00 0.00 0.00 00:12:06.763 00:12:07.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.700 Nvme0n1 : 7.00 15907.43 62.14 0.00 0.00 0.00 0.00 0.00 00:12:07.700 =================================================================================================================== 00:12:07.700 Total : 15907.43 62.14 0.00 0.00 0.00 0.00 0.00 00:12:07.700 00:12:08.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.637 Nvme0n1 : 8.00 15951.00 62.31 0.00 0.00 0.00 0.00 0.00 00:12:08.637 =================================================================================================================== 00:12:08.637 Total : 15951.00 62.31 0.00 0.00 0.00 0.00 0.00 00:12:08.637 00:12:09.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.574 Nvme0n1 : 9.00 15984.89 62.44 0.00 0.00 0.00 0.00 0.00 00:12:09.574 =================================================================================================================== 00:12:09.574 Total : 15984.89 62.44 0.00 0.00 0.00 0.00 0.00 00:12:09.574 00:12:10.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.511 Nvme0n1 : 10.00 16012.00 62.55 0.00 0.00 0.00 0.00 0.00 00:12:10.511 =================================================================================================================== 00:12:10.511 Total : 16012.00 62.55 0.00 0.00 0.00 0.00 0.00 00:12:10.511 00:12:10.511 00:12:10.511 Latency(us) 00:12:10.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.511 Nvme0n1 : 10.01 16012.60 62.55 0.00 0.00 7988.89 4344.79 15922.82 00:12:10.511 =================================================================================================================== 00:12:10.511 Total : 16012.60 62.55 0.00 0.00 7988.89 4344.79 15922.82 00:12:10.511 0 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 748692 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 748692 ']' 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 748692 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748692 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748692' 00:12:10.511 killing process with pid 748692 00:12:10.511 16:04:56 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 748692 00:12:10.511 Received shutdown signal, test time was about 10.000000 seconds 00:12:10.511 00:12:10.511 Latency(us) 00:12:10.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.511 =================================================================================================================== 00:12:10.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:10.511 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 748692 00:12:10.769 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.027 16:04:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:11.285 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:11.285 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:11.543 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:11.543 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:11.543 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 746078 00:12:11.544 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 746078 00:12:11.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 746078 Killed "${NVMF_APP[@]}" "$@" 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=750063 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 750063 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 750063 ']' 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:11.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.802 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:11.802 [2024-07-15 16:04:57.609380] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:11.802 [2024-07-15 16:04:57.609458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.802 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.802 [2024-07-15 16:04:57.675478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.802 [2024-07-15 16:04:57.780174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.802 [2024-07-15 16:04:57.780250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.802 [2024-07-15 16:04:57.780271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.802 [2024-07-15 16:04:57.780282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.802 [2024-07-15 16:04:57.780292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.802 [2024-07-15 16:04:57.780318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.060 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.061 16:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.320 [2024-07-15 16:04:58.129426] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:12.320 [2024-07-15 16:04:58.129544] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:12.320 [2024-07-15 16:04:58.129589] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 49aba176-ec7f-4815-b08d-add0f1f3cabd 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=49aba176-ec7f-4815-b08d-add0f1f3cabd 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:12.320 16:04:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:12.320 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:12.580 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49aba176-ec7f-4815-b08d-add0f1f3cabd -t 2000 00:12:12.840 [ 00:12:12.840 { 00:12:12.840 "name": "49aba176-ec7f-4815-b08d-add0f1f3cabd", 00:12:12.840 "aliases": [ 00:12:12.840 "lvs/lvol" 00:12:12.840 ], 00:12:12.840 "product_name": "Logical Volume", 00:12:12.840 "block_size": 4096, 00:12:12.840 "num_blocks": 38912, 00:12:12.840 "uuid": "49aba176-ec7f-4815-b08d-add0f1f3cabd", 00:12:12.840 "assigned_rate_limits": { 00:12:12.840 "rw_ios_per_sec": 0, 00:12:12.840 "rw_mbytes_per_sec": 0, 00:12:12.840 "r_mbytes_per_sec": 0, 00:12:12.840 "w_mbytes_per_sec": 0 00:12:12.840 }, 00:12:12.840 "claimed": false, 00:12:12.840 "zoned": false, 00:12:12.840 "supported_io_types": { 00:12:12.840 "read": true, 00:12:12.840 "write": true, 00:12:12.840 "unmap": true, 00:12:12.840 "flush": false, 00:12:12.840 "reset": true, 00:12:12.840 "nvme_admin": false, 00:12:12.840 "nvme_io": false, 00:12:12.840 "nvme_io_md": false, 00:12:12.840 "write_zeroes": true, 00:12:12.840 "zcopy": false, 00:12:12.840 "get_zone_info": false, 00:12:12.840 "zone_management": false, 00:12:12.840 "zone_append": false, 00:12:12.840 "compare": false, 00:12:12.840 "compare_and_write": false, 00:12:12.840 "abort": false, 00:12:12.840 "seek_hole": true, 00:12:12.840 "seek_data": true, 00:12:12.840 "copy": false, 00:12:12.840 "nvme_iov_md": false 00:12:12.840 }, 00:12:12.840 "driver_specific": { 00:12:12.840 "lvol": { 00:12:12.840 "lvol_store_uuid": "7923d084-fb28-43a6-9587-159c86b4d224", 00:12:12.840 "base_bdev": "aio_bdev", 00:12:12.840 "thin_provision": false, 00:12:12.840 "num_allocated_clusters": 38, 00:12:12.840 "snapshot": false, 00:12:12.840 "clone": false, 00:12:12.840 "esnap_clone": false 00:12:12.840 } 00:12:12.840 } 00:12:12.840 } 00:12:12.840 ] 00:12:12.840 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:12.840 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:12.840 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:13.100 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:13.100 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:13.100 16:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:13.359 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:13.359 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:13.618 [2024-07-15 16:04:59.362745] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.618 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:13.877 request: 00:12:13.877 { 00:12:13.877 "uuid": "7923d084-fb28-43a6-9587-159c86b4d224", 00:12:13.877 "method": "bdev_lvol_get_lvstores", 00:12:13.877 "req_id": 1 00:12:13.877 } 00:12:13.877 Got JSON-RPC error response 00:12:13.877 response: 00:12:13.877 { 00:12:13.877 "code": -19, 00:12:13.877 "message": "No such device" 00:12:13.877 } 00:12:13.877 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:13.877 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:13.877 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:13.877 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:13.877 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.136 aio_bdev 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 49aba176-ec7f-4815-b08d-add0f1f3cabd 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@897 -- # local bdev_name=49aba176-ec7f-4815-b08d-add0f1f3cabd 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:14.136 16:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:14.393 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49aba176-ec7f-4815-b08d-add0f1f3cabd -t 2000 00:12:14.393 [ 00:12:14.393 { 00:12:14.393 "name": "49aba176-ec7f-4815-b08d-add0f1f3cabd", 00:12:14.393 "aliases": [ 00:12:14.393 "lvs/lvol" 00:12:14.393 ], 00:12:14.393 "product_name": "Logical Volume", 00:12:14.393 "block_size": 4096, 00:12:14.393 "num_blocks": 38912, 00:12:14.393 "uuid": "49aba176-ec7f-4815-b08d-add0f1f3cabd", 00:12:14.393 "assigned_rate_limits": { 00:12:14.393 "rw_ios_per_sec": 0, 00:12:14.393 "rw_mbytes_per_sec": 0, 00:12:14.393 "r_mbytes_per_sec": 0, 00:12:14.393 "w_mbytes_per_sec": 0 00:12:14.393 }, 00:12:14.393 "claimed": false, 00:12:14.393 "zoned": false, 00:12:14.393 "supported_io_types": { 00:12:14.393 "read": true, 00:12:14.393 "write": true, 00:12:14.393 "unmap": true, 00:12:14.393 "flush": false, 00:12:14.393 "reset": true, 00:12:14.393 "nvme_admin": false, 00:12:14.393 "nvme_io": false, 00:12:14.393 "nvme_io_md": false, 00:12:14.393 "write_zeroes": true, 00:12:14.393 "zcopy": false, 00:12:14.393 "get_zone_info": false, 00:12:14.393 "zone_management": false, 00:12:14.393 "zone_append": false, 00:12:14.393 "compare": false, 00:12:14.393 "compare_and_write": false, 00:12:14.393 "abort": false, 00:12:14.393 "seek_hole": true, 00:12:14.393 "seek_data": true, 00:12:14.393 "copy": false, 00:12:14.393 "nvme_iov_md": false 00:12:14.393 }, 00:12:14.393 "driver_specific": { 00:12:14.393 "lvol": { 00:12:14.393 "lvol_store_uuid": "7923d084-fb28-43a6-9587-159c86b4d224", 00:12:14.393 "base_bdev": "aio_bdev", 00:12:14.393 "thin_provision": false, 00:12:14.393 "num_allocated_clusters": 38, 00:12:14.393 "snapshot": false, 00:12:14.393 "clone": false, 00:12:14.393 "esnap_clone": false 00:12:14.393 } 00:12:14.393 } 00:12:14.393 } 00:12:14.393 ] 00:12:14.393 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:14.393 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:14.393 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:14.651 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:14.651 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:14.651 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:12:14.922 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:14.922 16:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 49aba176-ec7f-4815-b08d-add0f1f3cabd 00:12:15.211 16:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7923d084-fb28-43a6-9587-159c86b4d224 00:12:15.475 16:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:15.733 00:12:15.733 real 0m19.077s 00:12:15.733 user 0m48.755s 00:12:15.733 sys 0m4.510s 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:15.733 ************************************ 00:12:15.733 END TEST lvs_grow_dirty 00:12:15.733 ************************************ 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:15.733 nvmf_trace.0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.733 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.733 rmmod nvme_tcp 00:12:15.990 rmmod nvme_fabrics 00:12:15.990 rmmod nvme_keyring 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:15.990 16:05:01 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 750063 ']' 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 750063 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 750063 ']' 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 750063 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750063 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 750063' 00:12:15.990 killing process with pid 750063 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 750063 00:12:15.990 16:05:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 750063 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.247 16:05:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.154 16:05:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.154 00:12:18.154 real 0m42.088s 00:12:18.154 user 1m11.481s 00:12:18.154 sys 0m8.311s 00:12:18.154 16:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.154 16:05:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:18.154 ************************************ 00:12:18.154 END TEST nvmf_lvs_grow 00:12:18.154 ************************************ 00:12:18.154 16:05:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:18.154 16:05:04 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:18.154 16:05:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.154 16:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.154 16:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.154 ************************************ 00:12:18.154 START TEST nvmf_bdev_io_wait 00:12:18.154 ************************************ 00:12:18.154 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:18.413 * Looking for test storage... 
00:12:18.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.413 16:05:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:20.948 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:20.948 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:20.948 Found net devices under 0000:09:00.0: cvl_0_0 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:20.948 Found net devices under 0000:09:00.1: cvl_0_1 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:20.948 00:12:20.948 --- 10.0.0.2 ping statistics --- 00:12:20.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.948 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:12:20.948 00:12:20.948 --- 10.0.0.1 ping statistics --- 00:12:20.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.948 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=752582 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 752582 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 752582 ']' 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.948 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.948 [2024-07-15 16:05:06.525464] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
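The nvmftestinit/nvmf_tcp_init trace above is the whole test topology for this run: both E810 ports are flushed, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP port 4420, and both directions are verified with ping before the target is launched inside the namespace. A standalone sketch of that wiring, using exactly the interface and namespace names from this run (the real helper in test/nvmf/common.sh also selects the interfaces and handles cleanup and retries):

# Sketch of the nvmf_tcp_init wiring shown in the trace above; run as root, names taken from this machine.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

With the plumbing verified, nvmfappstart runs the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc) and waitforlisten blocks until the RPC socket answers.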
00:12:20.949 [2024-07-15 16:05:06.525537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.949 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.949 [2024-07-15 16:05:06.590848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.949 [2024-07-15 16:05:06.696501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.949 [2024-07-15 16:05:06.696570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.949 [2024-07-15 16:05:06.696590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.949 [2024-07-15 16:05:06.696600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.949 [2024-07-15 16:05:06.696613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.949 [2024-07-15 16:05:06.696702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.949 [2024-07-15 16:05:06.696813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.949 [2024-07-15 16:05:06.696940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.949 [2024-07-15 16:05:06.696943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 [2024-07-15 16:05:06.830845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
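Because the target was started with --wait-for-rpc, nothing is configured until the test drives it over the RPC socket; rpc_cmd here is a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. The configuration performed in this trace, which continues in the entries below with the malloc bdev, subsystem and listener, amounts to the sequence sketched here, assuming the same checkout path and default socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool and per-channel cache
$RPC framework_start_init              # finish the initialization deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The small pool set by bdev_set_options appears to be the point of this test: with only five bdev_io structures available, bdevperf can exhaust the pool, and the code path that queues I/O and waits for a free descriptor, the behaviour bdev_io_wait is named after, actually gets exercised.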
00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 Malloc0 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 [2024-07-15 16:05:06.891586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=752722 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=752724 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=752726 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.949 { 00:12:20.949 "params": { 00:12:20.949 "name": "Nvme$subsystem", 00:12:20.949 "trtype": "$TEST_TRANSPORT", 00:12:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.949 "adrfam": "ipv4", 00:12:20.949 "trsvcid": "$NVMF_PORT", 00:12:20.949 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.949 "hdgst": ${hdgst:-false}, 00:12:20.949 "ddgst": ${ddgst:-false} 00:12:20.949 }, 00:12:20.949 "method": "bdev_nvme_attach_controller" 00:12:20.949 } 00:12:20.949 EOF 00:12:20.949 )") 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=752728 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.949 { 00:12:20.949 "params": { 00:12:20.949 "name": "Nvme$subsystem", 00:12:20.949 "trtype": "$TEST_TRANSPORT", 00:12:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.949 "adrfam": "ipv4", 00:12:20.949 "trsvcid": "$NVMF_PORT", 00:12:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.949 "hdgst": ${hdgst:-false}, 00:12:20.949 "ddgst": ${ddgst:-false} 00:12:20.949 }, 00:12:20.949 "method": "bdev_nvme_attach_controller" 00:12:20.949 } 00:12:20.949 EOF 00:12:20.949 )") 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.949 { 00:12:20.949 "params": { 00:12:20.949 "name": "Nvme$subsystem", 00:12:20.949 "trtype": "$TEST_TRANSPORT", 00:12:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.949 "adrfam": "ipv4", 00:12:20.949 "trsvcid": "$NVMF_PORT", 00:12:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.949 "hdgst": ${hdgst:-false}, 00:12:20.949 "ddgst": ${ddgst:-false} 00:12:20.949 }, 00:12:20.949 "method": "bdev_nvme_attach_controller" 00:12:20.949 } 00:12:20.949 EOF 00:12:20.949 )") 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:12:20.949 { 00:12:20.949 "params": { 00:12:20.949 "name": "Nvme$subsystem", 00:12:20.949 "trtype": "$TEST_TRANSPORT", 00:12:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.949 "adrfam": "ipv4", 00:12:20.949 "trsvcid": "$NVMF_PORT", 00:12:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.949 "hdgst": ${hdgst:-false}, 00:12:20.949 "ddgst": ${ddgst:-false} 00:12:20.949 }, 00:12:20.949 "method": "bdev_nvme_attach_controller" 00:12:20.949 } 00:12:20.949 EOF 00:12:20.949 )") 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 752722 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.949 "params": { 00:12:20.949 "name": "Nvme1", 00:12:20.949 "trtype": "tcp", 00:12:20.949 "traddr": "10.0.0.2", 00:12:20.949 "adrfam": "ipv4", 00:12:20.949 "trsvcid": "4420", 00:12:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.949 "hdgst": false, 00:12:20.949 "ddgst": false 00:12:20.949 }, 00:12:20.949 "method": "bdev_nvme_attach_controller" 00:12:20.949 }' 00:12:20.949 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.950 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.950 "params": { 00:12:20.950 "name": "Nvme1", 00:12:20.950 "trtype": "tcp", 00:12:20.950 "traddr": "10.0.0.2", 00:12:20.950 "adrfam": "ipv4", 00:12:20.950 "trsvcid": "4420", 00:12:20.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.950 "hdgst": false, 00:12:20.950 "ddgst": false 00:12:20.950 }, 00:12:20.950 "method": "bdev_nvme_attach_controller" 00:12:20.950 }' 00:12:20.950 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.950 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.950 "params": { 00:12:20.950 "name": "Nvme1", 00:12:20.950 "trtype": "tcp", 00:12:20.950 "traddr": "10.0.0.2", 00:12:20.950 "adrfam": "ipv4", 00:12:20.950 "trsvcid": "4420", 00:12:20.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.950 "hdgst": false, 00:12:20.950 "ddgst": false 00:12:20.950 }, 00:12:20.950 "method": "bdev_nvme_attach_controller" 00:12:20.950 }' 00:12:20.950 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:20.950 16:05:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.950 "params": { 00:12:20.950 "name": "Nvme1", 00:12:20.950 "trtype": "tcp", 00:12:20.950 "traddr": "10.0.0.2", 00:12:20.950 "adrfam": "ipv4", 00:12:20.950 "trsvcid": "4420", 00:12:20.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.950 "hdgst": false, 00:12:20.950 "ddgst": false 00:12:20.950 }, 00:12:20.950 "method": 
"bdev_nvme_attach_controller" 00:12:20.950 }' 00:12:20.950 [2024-07-15 16:05:06.939919] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:20.950 [2024-07-15 16:05:06.939919] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:20.950 [2024-07-15 16:05:06.939944] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:20.950 [2024-07-15 16:05:06.939944] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:20.950 [2024-07-15 16:05:06.940029] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 16:05:06.940031] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 16:05:06.940029] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 16:05:06.940031] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:20.950 --proc-type=auto ] 00:12:20.950 --proc-type=auto ] 00:12:20.950 --proc-type=auto ] 00:12:21.208 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.208 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.208 [2024-07-15 16:05:07.117666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.208 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.467 [2024-07-15 16:05:07.217707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:21.467 [2024-07-15 16:05:07.221189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.467 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.467 [2024-07-15 16:05:07.293168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.467 [2024-07-15 16:05:07.320186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:21.468 [2024-07-15 16:05:07.363548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.468 [2024-07-15 16:05:07.389549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:21.468 [2024-07-15 16:05:07.457529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:21.726 Running I/O for 1 seconds... 00:12:21.726 Running I/O for 1 seconds... 00:12:21.726 Running I/O for 1 seconds... 00:12:21.984 Running I/O for 1 seconds... 
00:12:22.553 00:12:22.553 Latency(us) 00:12:22.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.553 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:22.553 Nvme1n1 : 1.01 10657.13 41.63 0.00 0.00 11959.07 7961.41 20291.89 00:12:22.553 =================================================================================================================== 00:12:22.553 Total : 10657.13 41.63 0.00 0.00 11959.07 7961.41 20291.89 00:12:22.813 00:12:22.813 Latency(us) 00:12:22.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.813 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:22.813 Nvme1n1 : 1.01 8650.34 33.79 0.00 0.00 14725.19 8155.59 26020.22 00:12:22.813 =================================================================================================================== 00:12:22.813 Total : 8650.34 33.79 0.00 0.00 14725.19 8155.59 26020.22 00:12:22.813 00:12:22.813 Latency(us) 00:12:22.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.813 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:22.813 Nvme1n1 : 1.00 175608.86 685.97 0.00 0.00 726.09 259.41 904.15 00:12:22.813 =================================================================================================================== 00:12:22.813 Total : 175608.86 685.97 0.00 0.00 726.09 259.41 904.15 00:12:22.813 00:12:22.813 Latency(us) 00:12:22.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.813 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:22.813 Nvme1n1 : 1.01 9957.71 38.90 0.00 0.00 12805.46 6359.42 24758.04 00:12:22.813 =================================================================================================================== 00:12:22.813 Total : 9957.71 38.90 0.00 0.00 12805.46 6359.42 24758.04 00:12:23.073 16:05:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 752724 00:12:23.073 16:05:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 752726 00:12:23.073 16:05:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 752728 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.331 rmmod nvme_tcp 00:12:23.331 rmmod nvme_fabrics 00:12:23.331 rmmod nvme_keyring 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 752582 ']' 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 752582 00:12:23.331 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 752582 ']' 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 752582 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 752582 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 752582' 00:12:23.332 killing process with pid 752582 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 752582 00:12:23.332 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 752582 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.591 16:05:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.499 16:05:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.499 00:12:25.499 real 0m7.312s 00:12:25.499 user 0m16.892s 00:12:25.499 sys 0m3.611s 00:12:25.499 16:05:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.499 16:05:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:25.499 ************************************ 00:12:25.499 END TEST nvmf_bdev_io_wait 00:12:25.499 ************************************ 00:12:25.499 16:05:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.499 16:05:11 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:25.499 16:05:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.499 16:05:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.499 16:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.758 ************************************ 00:12:25.758 START TEST nvmf_queue_depth 00:12:25.758 ************************************ 00:12:25.758 
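Before the queue_depth test begins, the bdev_io_wait run is torn down in the entries just above with the same pattern used at the end of nvmf_lvs_grow: delete the subsystem over RPC, unload the kernel initiator modules, kill the target by PID, and undo the namespace addressing. A rough equivalent is sketched below; the namespace removal step is an assumption, since _remove_spdk_ns is not expanded in this trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the exported bdev and its listener

sync
modprobe -v -r nvme-tcp        # retried up to 20 times in the trace above
modprobe -v -r nvme-fabrics

kill 752582                    # $nvmfpid for this run; killprocess also polls until the PID is gone

ip netns delete cvl_0_0_ns_spdk   # assumption: the effect of _remove_spdk_ns in this setup
ip -4 addr flush cvl_0_1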
16:05:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:25.758 * Looking for test storage... 00:12:25.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.758 16:05:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.759 16:05:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.659 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.660 
16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:27.660 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:27.660 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:27.660 Found net devices under 0000:09:00.0: cvl_0_0 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:27.660 Found net devices under 0000:09:00.1: cvl_0_1 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.660 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:27.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:27.917 00:12:27.917 --- 10.0.0.2 ping statistics --- 00:12:27.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.917 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:27.917 00:12:27.917 --- 10.0.0.1 ping statistics --- 00:12:27.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.917 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=754948 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 754948 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 754948 ']' 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.917 16:05:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.917 [2024-07-15 16:05:13.862616] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
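In plain terms, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) carves the two E810 ports into a back-to-back target/initiator pair: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a single ping in each direction confirms the link. A minimal standalone sketch of that setup, reusing the interface and namespace names from the trace (adjust them for other NICs), could look like:

  # sketch of the nvmf_tcp_init steps seen above; run as root
  TGT_IF=cvl_0_0        # port handed to the SPDK target (name taken from the trace)
  INI_IF=cvl_0_1        # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk    # namespace name used by nvmf/common.sh
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                          # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                      # target namespace -> initiator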
00:12:27.917 [2024-07-15 16:05:13.862687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.917 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.175 [2024-07-15 16:05:13.924540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.175 [2024-07-15 16:05:14.023799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.175 [2024-07-15 16:05:14.023856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.175 [2024-07-15 16:05:14.023878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.175 [2024-07-15 16:05:14.023888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.175 [2024-07-15 16:05:14.023897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.175 [2024-07-15 16:05:14.023922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.175 [2024-07-15 16:05:14.164499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.175 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.432 Malloc0 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.432 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.433 
16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 [2024-07-15 16:05:14.219544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=754971 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 754971 /var/tmp/bdevperf.sock 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 754971 ']' 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:28.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.433 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 [2024-07-15 16:05:14.264497] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
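Condensed, the queue_depth run above and below is a short RPC conversation: a single-core nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, a RAM-backed bdev is exported through an NVMe-oF TCP subsystem listening on 10.0.0.2:4420, and bdevperf on the initiator side attaches to it and drives 4 KiB verify I/O at queue depth 1024 for 10 seconds. A sketch of the same sequence as plain rpc.py calls (commands and arguments are taken from the trace; the rpc_cmd/nvmfappstart wrappers normally issue these for you):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
  # target: core mask 0x2, all tracepoint groups enabled, run inside the target namespace
  $NS_EXEC $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB in-capsule data
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512 B blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: bdevperf waits for RPC (-z), queue depth 1024, 4 KiB verify I/O for 10 s
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests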
00:12:28.433 [2024-07-15 16:05:14.264569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754971 ] 00:12:28.433 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.433 [2024-07-15 16:05:14.323150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.433 [2024-07-15 16:05:14.427600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.690 NVMe0n1 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.690 16:05:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:28.948 Running I/O for 10 seconds... 00:12:38.929 00:12:38.929 Latency(us) 00:12:38.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.929 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:38.929 Verification LBA range: start 0x0 length 0x4000 00:12:38.929 NVMe0n1 : 10.08 9028.69 35.27 0.00 0.00 112999.72 20874.43 68739.98 00:12:38.929 =================================================================================================================== 00:12:38.929 Total : 9028.69 35.27 0.00 0.00 112999.72 20874.43 68739.98 00:12:38.929 0 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 754971 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 754971 ']' 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 754971 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.929 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754971 00:12:39.193 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:39.193 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:39.193 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754971' 00:12:39.193 killing process with pid 754971 00:12:39.193 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 754971 00:12:39.193 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.193 00:12:39.193 Latency(us) 00:12:39.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.193 =================================================================================================================== 
00:12:39.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:39.193 16:05:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 754971 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.453 rmmod nvme_tcp 00:12:39.453 rmmod nvme_fabrics 00:12:39.453 rmmod nvme_keyring 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 754948 ']' 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 754948 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 754948 ']' 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 754948 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754948 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754948' 00:12:39.453 killing process with pid 754948 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 754948 00:12:39.453 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 754948 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.738 16:05:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.647 16:05:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:41.647 00:12:41.647 real 0m16.128s 00:12:41.647 user 0m22.644s 
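Two quick sanity checks on the result table above: throughput is IOPS times I/O size, 9028.69 x 4096 B / 2^20 ≈ 35.27 MiB/s, which matches the MiB/s column; and by Little's law the average latency should be roughly queue depth / IOPS = 1024 / 9028.69 ≈ 0.113 s, consistent with the ~113,000 us reported in the Average column for this 1024-deep, 10-second verify run.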
00:12:41.647 sys 0m3.036s 00:12:41.647 16:05:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.647 16:05:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.647 ************************************ 00:12:41.647 END TEST nvmf_queue_depth 00:12:41.647 ************************************ 00:12:41.905 16:05:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.905 16:05:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:41.905 16:05:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.905 16:05:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.905 16:05:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.905 ************************************ 00:12:41.905 START TEST nvmf_target_multipath 00:12:41.905 ************************************ 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:41.906 * Looking for test storage... 00:12:41.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.906 16:05:27 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.906 16:05:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:44.438 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:44.438 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.438 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:44.439 Found net devices under 0000:09:00.0: cvl_0_0 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:44.439 Found net devices under 0000:09:00.1: cvl_0_1 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:44.439 00:12:44.439 --- 10.0.0.2 ping statistics --- 00:12:44.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.439 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:44.439 00:12:44.439 --- 10.0.0.1 ping statistics --- 00:12:44.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.439 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:44.439 only one NIC for nvmf test 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.439 16:05:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.439 rmmod nvme_tcp 00:12:44.439 rmmod nvme_fabrics 00:12:44.439 rmmod nvme_keyring 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.439 16:05:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.347 00:12:46.347 real 0m4.422s 00:12:46.347 user 0m0.863s 00:12:46.347 sys 0m1.548s 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.347 16:05:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:46.347 ************************************ 00:12:46.347 END TEST nvmf_target_multipath 00:12:46.347 ************************************ 00:12:46.347 16:05:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.347 16:05:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:46.347 16:05:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.347 16:05:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.347 16:05:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.347 ************************************ 00:12:46.347 START TEST nvmf_zcopy 00:12:46.347 ************************************ 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:46.347 * Looking for test storage... 
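The multipath case never gets past setup here: with a single E810 port pair there is only one usable path, the second target IP is left empty at nvmf/common.sh@240, and multipath.sh@45-48 prints 'only one NIC for nvmf test', tears the environment back down and exits 0. The guard traced there has roughly this shape (the tested variable name is inferred, not shown in the trace):

  # inferred shape of the check at target/multipath.sh@45-48
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini    # unload nvme-tcp/nvme-fabrics/nvme-keyring, flush addresses, drop the netns
      exit 0
  fi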
00:12:46.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.347 16:05:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:48.254 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.254 
16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:48.254 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:48.254 Found net devices under 0000:09:00.0: cvl_0_0 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:48.254 Found net devices under 0000:09:00.1: cvl_0_1 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.254 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:48.515 00:12:48.515 --- 10.0.0.2 ping statistics --- 00:12:48.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.515 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:12:48.515 00:12:48.515 --- 10.0.0.1 ping statistics --- 00:12:48.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.515 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=760146 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 760146 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 760146 ']' 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.515 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.515 [2024-07-15 16:05:34.342612] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:48.515 [2024-07-15 16:05:34.342692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.515 [2024-07-15 16:05:34.405660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.774 [2024-07-15 16:05:34.519306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.774 [2024-07-15 16:05:34.519382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:48.774 [2024-07-15 16:05:34.519397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.774 [2024-07-15 16:05:34.519408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.774 [2024-07-15 16:05:34.519418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.774 [2024-07-15 16:05:34.519457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 [2024-07-15 16:05:34.665656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 [2024-07-15 16:05:34.681836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 malloc0 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 
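(Editor's note) The nvmf_tcp_init and nvmfappstart steps traced above condense to roughly the shell sequence below. Interface names, addresses, port and core mask are taken from this run; the relative binary path, the /var/tmp/spdk.sock socket and the simplified wait loop are assumptions, since the real waitforlisten helper does more elaborate retry and timeout handling.

# Build the two-port TCP test topology: the target-side port is moved into its own
# network namespace so initiator and target traffic cross the physical e810 links.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
# Start nvmf_tgt on core 1 (-m 0x2) inside the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done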
16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:48.774 { 00:12:48.774 "params": { 00:12:48.774 "name": "Nvme$subsystem", 00:12:48.774 "trtype": "$TEST_TRANSPORT", 00:12:48.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.774 "adrfam": "ipv4", 00:12:48.774 "trsvcid": "$NVMF_PORT", 00:12:48.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.774 "hdgst": ${hdgst:-false}, 00:12:48.774 "ddgst": ${ddgst:-false} 00:12:48.774 }, 00:12:48.774 "method": "bdev_nvme_attach_controller" 00:12:48.774 } 00:12:48.774 EOF 00:12:48.774 )") 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:48.774 16:05:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:48.774 "params": { 00:12:48.774 "name": "Nvme1", 00:12:48.774 "trtype": "tcp", 00:12:48.774 "traddr": "10.0.0.2", 00:12:48.774 "adrfam": "ipv4", 00:12:48.774 "trsvcid": "4420", 00:12:48.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:48.774 "hdgst": false, 00:12:48.774 "ddgst": false 00:12:48.774 }, 00:12:48.774 "method": "bdev_nvme_attach_controller" 00:12:48.774 }' 00:12:48.774 [2024-07-15 16:05:34.764190] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:48.775 [2024-07-15 16:05:34.764280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760169 ] 00:12:49.033 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.033 [2024-07-15 16:05:34.828551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.033 [2024-07-15 16:05:34.937464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.293 Running I/O for 10 seconds... 
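(Editor's note) The target-side RPCs traced above can be replayed by hand roughly as follows before starting the verify run. All flag values are verbatim from this trace; the scripts/rpc.py invocation and socket path are assumptions, since the test drives the same calls through its rpc_cmd wrapper.

rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                        # TCP transport, zero-copy enabled, no in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                               # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1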
00:12:59.278 
00:12:59.278                                                                                                 Latency(us)
00:12:59.278 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:59.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:59.278 	 Verification LBA range: start 0x0 length 0x1000
00:12:59.278 	 Nvme1n1             :      10.01    6112.53      47.75       0.00     0.00   20884.56    3519.53   27962.03
00:12:59.278 ===================================================================================================================
00:12:59.278 Total               :                6112.53      47.75       0.00     0.00   20884.56    3519.53   27962.03
00:12:59.535 16:05:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=761479
00:12:59.535 16:05:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:59.536 {
00:12:59.536   "params": {
00:12:59.536     "name": "Nvme$subsystem",
00:12:59.536     "trtype": "$TEST_TRANSPORT",
00:12:59.536     "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:59.536     "adrfam": "ipv4",
00:12:59.536     "trsvcid": "$NVMF_PORT",
00:12:59.536     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:59.536     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:59.536     "hdgst": ${hdgst:-false},
00:12:59.536     "ddgst": ${ddgst:-false}
00:12:59.536   },
00:12:59.536   "method": "bdev_nvme_attach_controller"
00:12:59.536 }
00:12:59.536 EOF
00:12:59.536 )")
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
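(Editor's note) gen_nvmf_target_json, traced above for both bdevperf runs, expands into an SPDK JSON configuration that attaches the remote namespace as bdev Nvme1n1 before the workload starts. A standalone equivalent of the 5-second randrw run launched above might look like the sketch below; the parameter values are verbatim from this trace, while the temporary file name and the outer "subsystems"/"bdev" wrapper follow the usual SPDK JSON-config layout and are not shown literally in the log.

cat > /tmp/zcopy_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same workload flags as the run above: 5 s, queue depth 128, 50/50 random read/write, 8 KiB IOs.
./build/examples/bdevperf --json /tmp/zcopy_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192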
00:12:59.536 [2024-07-15 16:05:45.523621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.536 [2024-07-15 16:05:45.523664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:59.536 16:05:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:59.536 "params": { 00:12:59.536 "name": "Nvme1", 00:12:59.536 "trtype": "tcp", 00:12:59.536 "traddr": "10.0.0.2", 00:12:59.536 "adrfam": "ipv4", 00:12:59.536 "trsvcid": "4420", 00:12:59.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:59.536 "hdgst": false, 00:12:59.536 "ddgst": false 00:12:59.536 }, 00:12:59.536 "method": "bdev_nvme_attach_controller" 00:12:59.536 }' 00:12:59.536 [2024-07-15 16:05:45.531569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.536 [2024-07-15 16:05:45.531591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.539592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.539639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.547610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.547631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.555633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.555654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.559841] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:12:59.795 [2024-07-15 16:05:45.559904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761479 ] 00:12:59.795 [2024-07-15 16:05:45.563651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.563679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.571674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.571694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.579694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.579715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.587717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.587737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.795 [2024-07-15 16:05:45.595738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.595758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.603759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.603779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.611782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.611801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.619804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.619823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.619898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.795 [2024-07-15 16:05:45.627874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.627914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.635874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.635911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.643869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.643891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.651891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.651912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.659913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.659934] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.667934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.667977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.675977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.676000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.684036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.684073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.692041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.692072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.700040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.700062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.708074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.708095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.716080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.716102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.724103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.724125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.732128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.732151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.733007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.795 [2024-07-15 16:05:45.740149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.740171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.748191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.748221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.756224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.756278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.764266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.764307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.772295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.772339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.780329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.795 [2024-07-15 16:05:45.780374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.795 [2024-07-15 16:05:45.788328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.796 [2024-07-15 16:05:45.788371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.796 [2024-07-15 16:05:45.796366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.796 [2024-07-15 16:05:45.796425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.804340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.804363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.812416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.812456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.820426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.820468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.828411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.828432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.836426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.836446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.844485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.844508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.852509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.852532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.860516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.860550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.868548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.868574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.876558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.876580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.884595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.884618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.892617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:00.054 [2024-07-15 16:05:45.892638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.900638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.900658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.908662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.908682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.916682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.916702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.054 [2024-07-15 16:05:45.924701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.054 [2024-07-15 16:05:45.924726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.932732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.932758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.940751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.940775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.948779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.948801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.956799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.956820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.964808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.964845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.972835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.972874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.980858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.980895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.988880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.988917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:45.996903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:45.996939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:46.004924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.004971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 
16:05:46.012972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.013002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:46.020987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.021011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:46.029019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.029046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 Running I/O for 5 seconds... 00:13:00.055 [2024-07-15 16:05:46.041098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.041128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.055 [2024-07-15 16:05:46.051397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.055 [2024-07-15 16:05:46.051428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.064133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.064165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.076354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.076383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.088176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.088206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.100349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.100377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.111996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.112024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.123426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.123455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.134789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.134817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.146361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.146389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.157837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.157866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.170773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:00.313 [2024-07-15 16:05:46.170802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.181334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.181362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.193302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.193331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.204584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.204612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.215633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.215660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.227466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.227494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.239127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.239156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.250985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.251015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.262837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.262865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.274964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.274992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.286816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.286844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.298062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.298091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.313 [2024-07-15 16:05:46.309321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.313 [2024-07-15 16:05:46.309350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.320980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.321010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.331976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.332005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.345484] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.345511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.356233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.356277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.367393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.367420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.379013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.379042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.390291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.390335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.401595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.401623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.412778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.412806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.424585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.424613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.435980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.436009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.449780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.449808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.460916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.460967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.471688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.471716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.483196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.483225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.494694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.494722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.507027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.507057] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.519038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.519068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.530736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.530765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.544041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.544071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.555020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.555048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.571 [2024-07-15 16:05:46.567018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.571 [2024-07-15 16:05:46.567047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.578719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.578748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.591797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.591825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.602400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.602428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.613835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.613863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.625343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.625371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.636898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.636927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.648417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.648446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.659447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.659483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.672785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.672813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.683008] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.683036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.695080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.695109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.706666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.706694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.718263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.718291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.729813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.729841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.741089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.741117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.752396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.752423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.763897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.763925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.775088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.775117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.786622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.786650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.798246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.798288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.809307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.809335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.820834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.820862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.829 [2024-07-15 16:05:46.831874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.829 [2024-07-15 16:05:46.831902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.843446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.843475] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.855197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.855225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.866616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.866643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.879894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.879944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.890764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.890792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.901928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.901983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.913259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.913289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.924574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.924602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.936039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.936067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.947877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.947906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.959248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.959292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.970742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.970770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.982395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.982423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:46.994172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:46.994201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.005358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.005386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.016584] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.016628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.028182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.028211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.039882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.039909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.053201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.053230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.064228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.064258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.075514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.075542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.088 [2024-07-15 16:05:47.088583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.088 [2024-07-15 16:05:47.088610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.098811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.098855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.110441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.110469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.121491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.121519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.132841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.132870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.144607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.144636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.155616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.155644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.167209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.167237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.178495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.178523] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.190385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.190413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.201923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.201975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.213186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.213214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.224506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.224535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.235613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.235641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.247352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.247380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.258635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.258662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.269678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.269705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.281334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.281362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.292137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.292166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.303020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.303048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.313898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.313950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.324688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.324716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.335801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.335832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.349 [2024-07-15 16:05:47.347355] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.349 [2024-07-15 16:05:47.347383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.609 [2024-07-15 16:05:47.359234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.609 [2024-07-15 16:05:47.359278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.609 [2024-07-15 16:05:47.371194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.609 [2024-07-15 16:05:47.371223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.609 [2024-07-15 16:05:47.382741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.382769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.394476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.394504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.407753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.407781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.419303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.419345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.431823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.431851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.441084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.441112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.454438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.454466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.464805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.464848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.475989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.476017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.487446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.487489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.499124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.499152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.510304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.510331] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.521804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.521832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.533513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.533563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.545270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.545298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.556368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.556396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.567885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.567912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.581353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.581381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.591672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.591700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.610 [2024-07-15 16:05:47.604085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.610 [2024-07-15 16:05:47.604114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.615891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.615920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.627291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.627319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.638758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.638786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.650214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.650243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.661772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.661801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.674964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.674993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.685878] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.685907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.697375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.697404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.708115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.708143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.719672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.719700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.732757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.732785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.743923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.743980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.755603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.755631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.767429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.767456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.779111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.779149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.792427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.792454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.803486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.803513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.814613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.814641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.826107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.826134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.836737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.836764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.847893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.847934] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.858953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.858992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.869 [2024-07-15 16:05:47.870668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.869 [2024-07-15 16:05:47.870695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.882243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.882287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.893082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.893110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.904706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.904732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.915865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.915892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.927194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.927222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.938392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.938419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.950222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.950250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.961855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.961883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.973068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.973096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.986517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.986544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:47.998081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:47.998109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.011571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.011598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.022541] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.022568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.034020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.034047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.044996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.045023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.056168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.056195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.067250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.067277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.078717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.078744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.089840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.089867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.101193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.101221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.112136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.112164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.127 [2024-07-15 16:05:48.123557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.127 [2024-07-15 16:05:48.123583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.385 [2024-07-15 16:05:48.135080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.385 [2024-07-15 16:05:48.135109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.385 [2024-07-15 16:05:48.146521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.385 [2024-07-15 16:05:48.146547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.385 [2024-07-15 16:05:48.158005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.385 [2024-07-15 16:05:48.158032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.385 [2024-07-15 16:05:48.169800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.385 [2024-07-15 16:05:48.169827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.385 [2024-07-15 16:05:48.181642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.385 [2024-07-15 16:05:48.181669] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.193155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.193183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.205116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.205144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.216685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.216712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.228279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.228306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.239711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.239738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.251395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.251423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.262796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.262823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.274134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.274161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.285138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.285166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.296375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.296402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.309353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.309380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.320221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.320265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.331708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.331735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.343048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.343075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.354514] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.354541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.365730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.365757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.376754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.376780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.386 [2024-07-15 16:05:48.387588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.386 [2024-07-15 16:05:48.387615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.398629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.398663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.409756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.409783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.420859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.420886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.432018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.432046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.443146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.443175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.454650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.454676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.466120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.466147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.479109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.479136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.489086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.489114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.501031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.501058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.512721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.512749] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.524236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.524279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.535768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.535795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.547124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.547151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.559082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.559110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.570893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.570921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.582213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.582241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.593339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.593366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.604928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.604964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.616249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.616299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.629572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.629599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.645 [2024-07-15 16:05:48.640200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.645 [2024-07-15 16:05:48.640228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.651807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.651850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.663569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.663596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.677041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.677069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.688094] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.688122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.699177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.699205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.712177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.712204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.722825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.722852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.735132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.735160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.746482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.746524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.757866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.757893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.769314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.769341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.780872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.780901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.792289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.792316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.803807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.803835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.815592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.815620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.827304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.827347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.838848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.838884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.850310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.850338] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.861611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.861638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.873174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.873203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.884427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.884455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.895734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.895761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.905 [2024-07-15 16:05:48.907384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.905 [2024-07-15 16:05:48.907413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.918536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.918564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.929930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.929980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.941514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.941540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.953404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.953430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.964624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.964652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.975996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.164 [2024-07-15 16:05:48.976025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.164 [2024-07-15 16:05:48.987141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:48.987169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:48.998340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:48.998367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.009740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.009768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.021390] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.021417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.032785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.032812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.044663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.044691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.056358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.056396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.067651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.067678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.079092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.079119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.090530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.090557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.101816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.101842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.113126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.113154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.124846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.124873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.135638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.135664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.148669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.148696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.165 [2024-07-15 16:05:49.159362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.165 [2024-07-15 16:05:49.159389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.170457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.170485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.181642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.181669] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.193123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.193150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.204122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.204150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.215549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.215577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.227080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.227108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.238993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.239020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.250759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.250786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.262291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.262318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.273485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.273521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.284919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.284970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.296163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.296190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.423 [2024-07-15 16:05:49.307153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.423 [2024-07-15 16:05:49.307180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.318522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.318549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.330436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.330462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.341372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.341398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.352873] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.352900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.364797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.364824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.375922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.375975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.387819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.387846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.399541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.399568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.412346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.412373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.424 [2024-07-15 16:05:49.422371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.424 [2024-07-15 16:05:49.422399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.434544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.434572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.445804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.445831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.457508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.457537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.468709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.468736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.479507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.479534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.490875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.490903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.501768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.501796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.513387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.513415] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.524565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.524593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.536443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.536471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.548126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.548154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.559267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.559294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.570971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.570999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.581931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.581966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.593378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.593406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.605649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.605677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.616778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.616806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.628251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.628279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.639896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.639924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.653243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.653271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.664543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.664572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.682 [2024-07-15 16:05:49.676065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.682 [2024-07-15 16:05:49.676093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.689376] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.689405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.699665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.699692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.711046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.711074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.722788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.722816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.734503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.734530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.745804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.745832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.757470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.757499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.768530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.768558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.779523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.779551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.790279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.790308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.803311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.803338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.813659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.813686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.825076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.825104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.836707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.836735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.847892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.847919] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.858723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.858750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.871697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.871725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.881874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.881902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.893479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.893507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.904567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.904595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.915289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.915317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.926542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.926571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.941 [2024-07-15 16:05:49.939484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.941 [2024-07-15 16:05:49.939512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:49.950199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:49.950229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:49.960749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:49.960777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:49.971553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:49.971581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:49.983379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:49.983414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:49.995322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:49.995351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.007094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.007125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.018616] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.018647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.030678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.030721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.042484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.042512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.054448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.054476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.066371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.066400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.078495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.078522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.090134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.090162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.102148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.102175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.113689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.113718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.125701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.125728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.137322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.137357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.149418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.149445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.161043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.161072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.172771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.172798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.184650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.184688] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.201 [2024-07-15 16:05:50.195633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.201 [2024-07-15 16:05:50.195660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.207246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.207290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.218200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.218229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.229491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.229519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.240993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.241021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.252431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.252458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.263863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.263890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.277079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.277108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.287789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.287816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.298553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.298580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.311922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.311973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.322542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.322569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.460 [2024-07-15 16:05:50.334247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.460 [2024-07-15 16:05:50.334290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.345778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.345805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.357232] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.357283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.368850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.368877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.380871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.380898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.392894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.392922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.404884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.404912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.416968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.416995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.428095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.428123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.441437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.441464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.461 [2024-07-15 16:05:50.452588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.461 [2024-07-15 16:05:50.452615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.463689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.463718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.474866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.474894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.485728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.485756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.496729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.496756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.511942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.511994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.522925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.522975] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.534393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.534419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.545850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.545877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.557897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.557924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.568929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.568981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.580268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.580305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.591823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.591850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.602968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.602995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.615784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.615811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.626010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.626037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.637744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.637771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.649003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.649031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.660044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.660071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.670849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.670876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.683630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.683658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.694538] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.694566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.705996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.706024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.720 [2024-07-15 16:05:50.717039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.720 [2024-07-15 16:05:50.717067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.728277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.728306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.739737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.739764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.751193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.751221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.762140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.762167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.773655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.773682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.785028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.785056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.796628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.796666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.808099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.808127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.819387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.819414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.831023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.831051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.842743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.842770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.854522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.854549] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.865861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.865888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.879355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.879382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.890004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.890032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.901514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.901541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.915131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.915158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.926470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.926496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.937769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.937795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.949528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.949555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.961118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.961146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.979 [2024-07-15 16:05:50.972849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.979 [2024-07-15 16:05:50.972878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:50.984377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:50.984406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:50.995806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:50.995834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.007274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.007306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.019206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.019244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.031000] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.031033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.044249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.044277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.053888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.053918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 00:13:05.239 Latency(us) 00:13:05.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.239 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:05.239 Nvme1n1 : 5.01 11133.76 86.98 0.00 0.00 11481.73 2973.39 18252.99 00:13:05.239 =================================================================================================================== 00:13:05.239 Total : 11133.76 86.98 0.00 0.00 11481.73 2973.39 18252.99 00:13:05.239 [2024-07-15 16:05:51.059787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.059811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.067805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.067829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.075824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.075847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.083933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.084010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.091939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.091993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.099983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.100040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.107984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.108030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.116035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.116082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.124029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.124079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.132054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.132100] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.140086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.140133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.148097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.148147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.156122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.156171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.164148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.164193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.172168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.172217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.180199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.180250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.188144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.188167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.196170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.196192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.204203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.204226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.212209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.212246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.220284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.220325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.228320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.228369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.239 [2024-07-15 16:05:51.236359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.239 [2024-07-15 16:05:51.236422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.498 [2024-07-15 16:05:51.244325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.498 [2024-07-15 16:05:51.244347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.498 [2024-07-15 16:05:51.252337] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.498 [2024-07-15 16:05:51.252357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.498 [2024-07-15 16:05:51.260357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.498 [2024-07-15 16:05:51.260378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.268378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.268398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.276463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.276511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.284479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.284528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.292458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.292489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.300452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.300472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 [2024-07-15 16:05:51.308473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.499 [2024-07-15 16:05:51.308493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (761479) - No such process 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 761479 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.499 delay0 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.499 16:05:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw 
-M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:05.499 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.499 [2024-07-15 16:05:51.426778] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:12.063 Initializing NVMe Controllers 00:13:12.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.063 Initialization complete. Launching workers. 00:13:12.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 311 00:13:12.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 598, failed to submit 33 00:13:12.063 success 438, unsuccess 160, failed 0 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.063 rmmod nvme_tcp 00:13:12.063 rmmod nvme_fabrics 00:13:12.063 rmmod nvme_keyring 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 760146 ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 760146 ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 760146' 00:13:12.063 killing process with pid 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 760146 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy 
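The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the target rejecting nvmf_subsystem_add_ns calls for an NSID that is still attached; the test keeps issuing them while the Nvme1n1 job is running, and the run still completes and reports its latency summary, so these errors are expected noise rather than a failure. The tail of the test then swaps the namespace for a delay bdev and exercises it with the abort example. The same sequence can be issued by hand; this is a sketch only, assuming the target from this run is still up, rpc.py is invoked from the SPDK repo root on its default socket, and the malloc0 bdev already exists:

# replace namespace 1 of cnode1 with a delay bdev layered on malloc0
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive the delayed namespace over NVMe/TCP and inject aborts
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'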
-- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.063 16:05:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.598 16:05:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.598 00:13:14.598 real 0m27.826s 00:13:14.598 user 0m41.306s 00:13:14.598 sys 0m8.170s 00:13:14.598 16:05:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.598 16:05:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.598 ************************************ 00:13:14.598 END TEST nvmf_zcopy 00:13:14.598 ************************************ 00:13:14.598 16:06:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.598 16:06:00 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:14.598 16:06:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.598 16:06:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.598 16:06:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.598 ************************************ 00:13:14.598 START TEST nvmf_nmic 00:13:14.598 ************************************ 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:14.598 * Looking for test storage... 00:13:14.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.598 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.599 16:06:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.502 16:06:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:16.502 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:16.502 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.502 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:16.503 Found net devices 
under 0000:09:00.0: cvl_0_0 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:16.503 Found net devices under 0000:09:00.1: cvl_0_1 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # 
ping -c 1 10.0.0.2 00:13:16.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:13:16.503 00:13:16.503 --- 10.0.0.2 ping statistics --- 00:13:16.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.503 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:13:16.503 00:13:16.503 --- 10.0.0.1 ping statistics --- 00:13:16.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.503 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=764862 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 764862 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 764862 ']' 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.503 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.503 [2024-07-15 16:06:02.221103] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
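The nvmftestinit sequence above splits the two detected cvl_0_* ports between the host and a private network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and both directions are verified with ping. The equivalent manual setup, kept as a sketch because the cvl_0_* interface names and the namespace name are specific to this host, is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator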
00:13:16.503 [2024-07-15 16:06:02.221198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.503 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.503 [2024-07-15 16:06:02.291155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.503 [2024-07-15 16:06:02.405170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.503 [2024-07-15 16:06:02.405248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.503 [2024-07-15 16:06:02.405261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.503 [2024-07-15 16:06:02.405273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.503 [2024-07-15 16:06:02.405282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.503 [2024-07-15 16:06:02.405401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.503 [2024-07-15 16:06:02.405791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.503 [2024-07-15 16:06:02.405856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.503 [2024-07-15 16:06:02.405853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 [2024-07-15 16:06:02.569642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 Malloc0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 [2024-07-15 16:06:02.621410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:16.763 test case1: single bdev can't be used in multiple subsystems 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.763 [2024-07-15 16:06:02.645201] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:16.763 [2024-07-15 16:06:02.645231] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:16.763 [2024-07-15 16:06:02.645255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.763 request: 00:13:16.763 { 00:13:16.763 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:16.763 "namespace": { 00:13:16.763 "bdev_name": "Malloc0", 00:13:16.763 "no_auto_visible": false 00:13:16.763 }, 00:13:16.763 "method": "nvmf_subsystem_add_ns", 00:13:16.763 "req_id": 1 00:13:16.763 } 00:13:16.763 Got JSON-RPC error response 00:13:16.763 response: 00:13:16.763 { 00:13:16.763 "code": -32602, 00:13:16.763 "message": "Invalid parameters" 00:13:16.763 } 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:13:16.763 Adding namespace failed - expected result. 00:13:16.763 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:16.764 test case2: host connect to nvmf target in multiple paths 00:13:16.764 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:16.764 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.764 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.764 [2024-07-15 16:06:02.653335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:16.764 16:06:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.764 16:06:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.351 16:06:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:18.296 16:06:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.296 16:06:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.296 16:06:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.296 16:06:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.296 16:06:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:20.200 16:06:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:20.200 [global] 00:13:20.200 thread=1 00:13:20.200 invalidate=1 00:13:20.200 rw=write 00:13:20.200 time_based=1 00:13:20.200 runtime=1 00:13:20.200 ioengine=libaio 00:13:20.200 direct=1 00:13:20.200 bs=4096 00:13:20.200 iodepth=1 00:13:20.200 norandommap=0 00:13:20.200 numjobs=1 00:13:20.200 00:13:20.200 verify_dump=1 00:13:20.200 verify_backlog=512 00:13:20.200 verify_state_save=0 00:13:20.200 do_verify=1 00:13:20.200 verify=crc32c-intel 00:13:20.200 [job0] 00:13:20.200 filename=/dev/nvme0n1 00:13:20.200 Could not set queue depth (nvme0n1) 00:13:20.200 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.200 fio-3.35 00:13:20.200 Starting 1 thread 00:13:21.578 00:13:21.578 job0: (groupid=0, jobs=1): err= 0: pid=765495: Mon Jul 15 16:06:07 2024 00:13:21.578 read: IOPS=20, BW=83.4KiB/s 
(85.4kB/s)(84.0KiB/1007msec) 00:13:21.578 slat (nsec): min=9750, max=37549, avg=23294.00, stdev=9183.27 00:13:21.578 clat (usec): min=40549, max=42207, avg=41615.87, stdev=532.19 00:13:21.578 lat (usec): min=40558, max=42226, avg=41639.17, stdev=534.23 00:13:21.578 clat percentiles (usec): 00:13:21.578 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:21.578 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:21.578 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:21.578 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:21.578 | 99.99th=[42206] 00:13:21.578 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:13:21.578 slat (usec): min=8, max=30390, avg=78.22, stdev=1342.30 00:13:21.578 clat (usec): min=136, max=356, avg=176.07, stdev=19.63 00:13:21.578 lat (usec): min=145, max=30595, avg=254.30, stdev=1343.73 00:13:21.578 clat percentiles (usec): 00:13:21.578 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:13:21.578 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 182], 00:13:21.578 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:13:21.578 | 99.00th=[ 221], 99.50th=[ 245], 99.90th=[ 359], 99.95th=[ 359], 00:13:21.578 | 99.99th=[ 359] 00:13:21.578 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:21.578 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:21.578 lat (usec) : 250=95.68%, 500=0.38% 00:13:21.578 lat (msec) : 50=3.94% 00:13:21.578 cpu : usr=0.80%, sys=0.99%, ctx=536, majf=0, minf=2 00:13:21.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.578 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.578 00:13:21.578 Run status group 0 (all jobs): 00:13:21.578 READ: bw=83.4KiB/s (85.4kB/s), 83.4KiB/s-83.4KiB/s (85.4kB/s-85.4kB/s), io=84.0KiB (86.0kB), run=1007-1007msec 00:13:21.578 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec 00:13:21.578 00:13:21.578 Disk stats (read/write): 00:13:21.578 nvme0n1: ios=43/512, merge=0/0, ticks=1712/75, in_queue=1787, util=98.80% 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.578 rmmod nvme_tcp 00:13:21.578 rmmod nvme_fabrics 00:13:21.578 rmmod nvme_keyring 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 764862 ']' 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 764862 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 764862 ']' 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 764862 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764862 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764862' 00:13:21.578 killing process with pid 764862 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 764862 00:13:21.578 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 764862 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.146 16:06:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.058 16:06:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.058 00:13:24.058 real 0m9.851s 00:13:24.058 user 0m22.446s 00:13:24.058 sys 0m2.215s 00:13:24.058 16:06:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.058 16:06:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.058 ************************************ 00:13:24.058 END TEST nvmf_nmic 00:13:24.058 ************************************ 00:13:24.058 16:06:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.058 16:06:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.058 16:06:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.058 16:06:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.058 16:06:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.058 ************************************ 00:13:24.058 START TEST nvmf_fio_target 00:13:24.058 ************************************ 00:13:24.058 16:06:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.058 * Looking for test storage... 00:13:24.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.058 16:06:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.058 16:06:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.058 16:06:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.059 16:06:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.597 16:06:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:26.597 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:26.597 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.597 16:06:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:26.597 Found net devices under 0000:09:00.0: cvl_0_0 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:26.597 Found net devices under 0000:09:00.1: cvl_0_1 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.597 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:13:26.598 00:13:26.598 --- 10.0.0.2 ping statistics --- 00:13:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.598 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:26.598 00:13:26.598 --- 10.0.0.1 ping statistics --- 00:13:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.598 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=768067 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 768067 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 768067 ']' 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.598 16:06:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.598 [2024-07-15 16:06:12.272115] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:26.598 [2024-07-15 16:06:12.272192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.598 [2024-07-15 16:06:12.348707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.598 [2024-07-15 16:06:12.480713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.598 [2024-07-15 16:06:12.480790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.598 [2024-07-15 16:06:12.480828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.598 [2024-07-15 16:06:12.480849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.598 [2024-07-15 16:06:12.480867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.598 [2024-07-15 16:06:12.481073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.598 [2024-07-15 16:06:12.481118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.598 [2024-07-15 16:06:12.481182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.598 [2024-07-15 16:06:12.481190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.574 16:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:27.574 [2024-07-15 16:06:13.536845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.851 16:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.851 16:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:27.851 16:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.419 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:28.419 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.419 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:13:28.419 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.676 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:28.676 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:28.934 16:06:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.191 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:29.191 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.449 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:29.449 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.707 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:29.707 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:29.965 16:06:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.223 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:30.223 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.481 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:30.481 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.739 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.997 [2024-07-15 16:06:16.903029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.997 16:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:31.255 16:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:31.515 16:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:32.084 16:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:34.623 16:06:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:34.623 [global] 00:13:34.623 thread=1 00:13:34.623 invalidate=1 00:13:34.623 rw=write 00:13:34.623 time_based=1 00:13:34.623 runtime=1 00:13:34.623 ioengine=libaio 00:13:34.623 direct=1 00:13:34.623 bs=4096 00:13:34.623 iodepth=1 00:13:34.623 norandommap=0 00:13:34.623 numjobs=1 00:13:34.623 00:13:34.623 verify_dump=1 00:13:34.623 verify_backlog=512 00:13:34.623 verify_state_save=0 00:13:34.623 do_verify=1 00:13:34.623 verify=crc32c-intel 00:13:34.623 [job0] 00:13:34.623 filename=/dev/nvme0n1 00:13:34.623 [job1] 00:13:34.623 filename=/dev/nvme0n2 00:13:34.623 [job2] 00:13:34.623 filename=/dev/nvme0n3 00:13:34.623 [job3] 00:13:34.623 filename=/dev/nvme0n4 00:13:34.623 Could not set queue depth (nvme0n1) 00:13:34.623 Could not set queue depth (nvme0n2) 00:13:34.623 Could not set queue depth (nvme0n3) 00:13:34.623 Could not set queue depth (nvme0n4) 00:13:34.623 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.623 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.623 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.623 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.623 fio-3.35 00:13:34.623 Starting 4 threads 00:13:35.555 00:13:35.555 job0: (groupid=0, jobs=1): err= 0: pid=769151: Mon Jul 15 16:06:21 2024 00:13:35.555 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:13:35.555 slat (nsec): min=12607, max=48215, avg=20324.18, stdev=10402.92 00:13:35.555 clat (usec): min=40920, max=41993, avg=41506.91, stdev=498.76 00:13:35.555 lat (usec): min=40956, max=42011, avg=41527.24, stdev=497.61 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:35.556 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:13:35.556 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:35.556 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:35.556 | 99.99th=[42206] 00:13:35.556 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:13:35.556 slat (nsec): min=6431, max=38963, avg=13644.06, stdev=6322.32 
00:13:35.556 clat (usec): min=137, max=280, avg=168.03, stdev=15.15 00:13:35.556 lat (usec): min=146, max=303, avg=181.67, stdev=16.82 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:13:35.556 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:13:35.556 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:13:35.556 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 281], 99.95th=[ 281], 00:13:35.556 | 99.99th=[ 281] 00:13:35.556 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:35.556 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:35.556 lat (usec) : 250=95.69%, 500=0.19% 00:13:35.556 lat (msec) : 50=4.12% 00:13:35.556 cpu : usr=0.50%, sys=0.50%, ctx=535, majf=0, minf=1 00:13:35.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.556 job1: (groupid=0, jobs=1): err= 0: pid=769152: Mon Jul 15 16:06:21 2024 00:13:35.556 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:13:35.556 slat (nsec): min=12142, max=34270, avg=18976.36, stdev=7477.43 00:13:35.556 clat (usec): min=40863, max=41986, avg=41406.17, stdev=491.42 00:13:35.556 lat (usec): min=40897, max=42004, avg=41425.14, stdev=493.50 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:35.556 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:35.556 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:35.556 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:35.556 | 99.99th=[42206] 00:13:35.556 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:13:35.556 slat (nsec): min=7930, max=57250, avg=20128.14, stdev=8025.55 00:13:35.556 clat (usec): min=167, max=391, avg=218.26, stdev=25.40 00:13:35.556 lat (usec): min=179, max=400, avg=238.39, stdev=23.80 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:13:35.556 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:13:35.556 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:13:35.556 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 392], 99.95th=[ 392], 00:13:35.556 | 99.99th=[ 392] 00:13:35.556 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:35.556 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:35.556 lat (usec) : 250=87.08%, 500=8.80% 00:13:35.556 lat (msec) : 50=4.12% 00:13:35.556 cpu : usr=0.68%, sys=1.26%, ctx=535, majf=0, minf=1 00:13:35.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.556 job2: (groupid=0, jobs=1): err= 0: pid=769153: Mon Jul 15 16:06:21 2024 00:13:35.556 read: IOPS=1525, BW=6103KiB/s 
(6249kB/s)(6164KiB/1010msec) 00:13:35.556 slat (nsec): min=5727, max=51035, avg=13771.47, stdev=5992.62 00:13:35.556 clat (usec): min=195, max=41979, avg=373.69, stdev=2349.98 00:13:35.556 lat (usec): min=203, max=42008, avg=387.46, stdev=2350.39 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:13:35.556 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:13:35.556 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:13:35.556 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[41681], 99.95th=[42206], 00:13:35.556 | 99.99th=[42206] 00:13:35.556 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:13:35.556 slat (nsec): min=6950, max=63666, avg=14932.03, stdev=7177.72 00:13:35.556 clat (usec): min=138, max=286, avg=179.71, stdev=29.67 00:13:35.556 lat (usec): min=145, max=307, avg=194.64, stdev=32.81 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:13:35.556 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 174], 60.00th=[ 180], 00:13:35.556 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 231], 95.00th=[ 245], 00:13:35.556 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 277], 99.95th=[ 285], 00:13:35.556 | 99.99th=[ 285] 00:13:35.556 bw ( KiB/s): min= 7352, max= 9032, per=51.80%, avg=8192.00, stdev=1187.94, samples=2 00:13:35.556 iops : min= 1838, max= 2258, avg=2048.00, stdev=296.98, samples=2 00:13:35.556 lat (usec) : 250=87.88%, 500=11.98% 00:13:35.556 lat (msec) : 50=0.14% 00:13:35.556 cpu : usr=3.57%, sys=6.84%, ctx=3590, majf=0, minf=2 00:13:35.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.556 job3: (groupid=0, jobs=1): err= 0: pid=769154: Mon Jul 15 16:06:21 2024 00:13:35.556 read: IOPS=670, BW=2681KiB/s (2746kB/s)(2708KiB/1010msec) 00:13:35.556 slat (nsec): min=4309, max=41303, avg=12760.32, stdev=6974.56 00:13:35.556 clat (usec): min=197, max=42090, avg=1163.59, stdev=6096.25 00:13:35.556 lat (usec): min=202, max=42105, avg=1176.35, stdev=6097.06 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:13:35.556 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:13:35.556 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 338], 00:13:35.556 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:35.556 | 99.99th=[42206] 00:13:35.556 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:13:35.556 slat (nsec): min=5896, max=53218, avg=11239.74, stdev=6402.17 00:13:35.556 clat (usec): min=139, max=654, avg=191.90, stdev=41.30 00:13:35.556 lat (usec): min=146, max=662, avg=203.14, stdev=43.55 00:13:35.556 clat percentiles (usec): 00:13:35.556 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:13:35.556 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 186], 60.00th=[ 196], 00:13:35.556 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 251], 00:13:35.556 | 99.00th=[ 273], 99.50th=[ 322], 99.90th=[ 652], 99.95th=[ 652], 00:13:35.556 | 99.99th=[ 652] 00:13:35.556 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, 
avg=4096.00, stdev= 0.00, samples=2 00:13:35.556 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:35.556 lat (usec) : 250=85.66%, 500=13.29%, 750=0.18% 00:13:35.556 lat (msec) : 50=0.88% 00:13:35.556 cpu : usr=1.59%, sys=1.49%, ctx=1701, majf=0, minf=1 00:13:35.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.556 issued rwts: total=677,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.556 00:13:35.556 Run status group 0 (all jobs): 00:13:35.556 READ: bw=8734KiB/s (8943kB/s), 84.9KiB/s-6103KiB/s (87.0kB/s-6249kB/s), io=9048KiB (9265kB), run=1008-1036msec 00:13:35.556 WRITE: bw=15.4MiB/s (16.2MB/s), 1977KiB/s-8111KiB/s (2024kB/s-8306kB/s), io=16.0MiB (16.8MB), run=1008-1036msec 00:13:35.556 00:13:35.556 Disk stats (read/write): 00:13:35.556 nvme0n1: ios=42/512, merge=0/0, ticks=1618/86, in_queue=1704, util=85.27% 00:13:35.556 nvme0n2: ios=66/512, merge=0/0, ticks=1470/101, in_queue=1571, util=89.42% 00:13:35.556 nvme0n3: ios=1559/2048, merge=0/0, ticks=1269/332, in_queue=1601, util=93.51% 00:13:35.556 nvme0n4: ios=730/1024, merge=0/0, ticks=689/183, in_queue=872, util=95.56% 00:13:35.556 16:06:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:35.556 [global] 00:13:35.556 thread=1 00:13:35.556 invalidate=1 00:13:35.556 rw=randwrite 00:13:35.556 time_based=1 00:13:35.556 runtime=1 00:13:35.556 ioengine=libaio 00:13:35.556 direct=1 00:13:35.556 bs=4096 00:13:35.556 iodepth=1 00:13:35.556 norandommap=0 00:13:35.556 numjobs=1 00:13:35.556 00:13:35.556 verify_dump=1 00:13:35.556 verify_backlog=512 00:13:35.556 verify_state_save=0 00:13:35.556 do_verify=1 00:13:35.556 verify=crc32c-intel 00:13:35.556 [job0] 00:13:35.556 filename=/dev/nvme0n1 00:13:35.556 [job1] 00:13:35.556 filename=/dev/nvme0n2 00:13:35.556 [job2] 00:13:35.556 filename=/dev/nvme0n3 00:13:35.556 [job3] 00:13:35.557 filename=/dev/nvme0n4 00:13:35.557 Could not set queue depth (nvme0n1) 00:13:35.557 Could not set queue depth (nvme0n2) 00:13:35.557 Could not set queue depth (nvme0n3) 00:13:35.557 Could not set queue depth (nvme0n4) 00:13:35.814 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.814 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.814 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.814 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:35.814 fio-3.35 00:13:35.814 Starting 4 threads 00:13:37.199 00:13:37.199 job0: (groupid=0, jobs=1): err= 0: pid=769387: Mon Jul 15 16:06:22 2024 00:13:37.199 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6156KiB/1003msec) 00:13:37.199 slat (nsec): min=4513, max=54456, avg=14192.33, stdev=9772.09 00:13:37.199 clat (usec): min=190, max=41473, avg=378.43, stdev=2085.10 00:13:37.199 lat (usec): min=195, max=41486, avg=392.62, stdev=2085.27 00:13:37.199 clat percentiles (usec): 00:13:37.199 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:13:37.199 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 
60.00th=[ 241], 00:13:37.199 | 70.00th=[ 255], 80.00th=[ 306], 90.00th=[ 453], 95.00th=[ 478], 00:13:37.199 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41681], 00:13:37.199 | 99.99th=[41681] 00:13:37.199 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:13:37.199 slat (nsec): min=5749, max=62867, avg=12562.82, stdev=5542.48 00:13:37.199 clat (usec): min=130, max=588, avg=175.46, stdev=30.89 00:13:37.199 lat (usec): min=137, max=608, avg=188.02, stdev=33.59 00:13:37.199 clat percentiles (usec): 00:13:37.199 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:13:37.199 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:13:37.199 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 229], 00:13:37.199 | 99.00th=[ 273], 99.50th=[ 343], 99.90th=[ 490], 99.95th=[ 502], 00:13:37.199 | 99.99th=[ 586] 00:13:37.199 bw ( KiB/s): min= 8192, max= 8192, per=34.50%, avg=8192.00, stdev= 0.00, samples=2 00:13:37.199 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:13:37.199 lat (usec) : 250=85.31%, 500=13.97%, 750=0.59%, 1000=0.03% 00:13:37.199 lat (msec) : 50=0.11% 00:13:37.199 cpu : usr=2.79%, sys=4.69%, ctx=3587, majf=0, minf=2 00:13:37.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.199 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.199 job1: (groupid=0, jobs=1): err= 0: pid=769388: Mon Jul 15 16:06:22 2024 00:13:37.199 read: IOPS=1091, BW=4367KiB/s (4472kB/s)(4520KiB/1035msec) 00:13:37.199 slat (nsec): min=5908, max=56605, avg=17172.02, stdev=7756.32 00:13:37.199 clat (usec): min=208, max=41013, avg=562.62, stdev=2952.08 00:13:37.199 lat (usec): min=215, max=41026, avg=579.80, stdev=2951.90 00:13:37.199 clat percentiles (usec): 00:13:37.199 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 258], 00:13:37.199 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 326], 00:13:37.199 | 70.00th=[ 379], 80.00th=[ 457], 90.00th=[ 510], 95.00th=[ 545], 00:13:37.199 | 99.00th=[ 676], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:13:37.199 | 99.99th=[41157] 00:13:37.199 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:13:37.199 slat (nsec): min=7473, max=61964, avg=17442.41, stdev=8379.58 00:13:37.199 clat (usec): min=140, max=1375, avg=220.50, stdev=69.59 00:13:37.199 lat (usec): min=149, max=1387, avg=237.94, stdev=72.62 00:13:37.199 clat percentiles (usec): 00:13:37.199 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 178], 00:13:37.199 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 215], 00:13:37.199 | 70.00th=[ 227], 80.00th=[ 245], 90.00th=[ 310], 95.00th=[ 355], 00:13:37.199 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 848], 99.95th=[ 1369], 00:13:37.199 | 99.99th=[ 1369] 00:13:37.200 bw ( KiB/s): min= 4536, max= 7752, per=25.88%, avg=6144.00, stdev=2274.06, samples=2 00:13:37.200 iops : min= 1134, max= 1938, avg=1536.00, stdev=568.51, samples=2 00:13:37.200 lat (usec) : 250=53.53%, 500=41.37%, 750=4.65%, 1000=0.08% 00:13:37.200 lat (msec) : 2=0.11%, 4=0.04%, 50=0.23% 00:13:37.200 cpu : usr=3.87%, sys=5.51%, ctx=2667, majf=0, minf=1 00:13:37.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.200 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 issued rwts: total=1130,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.200 job2: (groupid=0, jobs=1): err= 0: pid=769391: Mon Jul 15 16:06:22 2024 00:13:37.200 read: IOPS=1679, BW=6717KiB/s (6878kB/s)(6724KiB/1001msec) 00:13:37.200 slat (nsec): min=5718, max=40712, avg=13589.12, stdev=6767.76 00:13:37.200 clat (usec): min=197, max=605, avg=308.51, stdev=93.71 00:13:37.200 lat (usec): min=203, max=626, avg=322.10, stdev=98.17 00:13:37.200 clat percentiles (usec): 00:13:37.200 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:13:37.200 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 281], 00:13:37.200 | 70.00th=[ 310], 80.00th=[ 424], 90.00th=[ 478], 95.00th=[ 498], 00:13:37.200 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 603], 99.95th=[ 603], 00:13:37.200 | 99.99th=[ 603] 00:13:37.200 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:37.200 slat (nsec): min=7321, max=79016, avg=16286.21, stdev=8995.60 00:13:37.200 clat (usec): min=134, max=556, avg=199.61, stdev=61.89 00:13:37.200 lat (usec): min=142, max=598, avg=215.90, stdev=67.56 00:13:37.200 clat percentiles (usec): 00:13:37.200 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:13:37.200 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 192], 00:13:37.200 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 285], 95.00th=[ 338], 00:13:37.200 | 99.00th=[ 449], 99.50th=[ 474], 99.90th=[ 498], 99.95th=[ 529], 00:13:37.200 | 99.99th=[ 553] 00:13:37.200 bw ( KiB/s): min= 7920, max= 7920, per=33.35%, avg=7920.00, stdev= 0.00, samples=1 00:13:37.200 iops : min= 1980, max= 1980, avg=1980.00, stdev= 0.00, samples=1 00:13:37.200 lat (usec) : 250=63.26%, 500=34.75%, 750=1.98% 00:13:37.200 cpu : usr=4.40%, sys=7.30%, ctx=3731, majf=0, minf=1 00:13:37.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 issued rwts: total=1681,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.200 job3: (groupid=0, jobs=1): err= 0: pid=769394: Mon Jul 15 16:06:22 2024 00:13:37.200 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:13:37.200 slat (nsec): min=13202, max=35547, avg=22948.00, stdev=9743.57 00:13:37.200 clat (usec): min=16644, max=42003, avg=40252.99, stdev=5297.43 00:13:37.200 lat (usec): min=16657, max=42018, avg=40275.93, stdev=5299.67 00:13:37.200 clat percentiles (usec): 00:13:37.200 | 1.00th=[16581], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:37.200 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:37.200 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:37.200 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:37.200 | 99.99th=[42206] 00:13:37.200 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:37.200 slat (nsec): min=8024, max=41107, avg=15588.77, stdev=6253.73 00:13:37.200 clat (usec): min=150, max=420, avg=206.25, stdev=36.39 00:13:37.200 lat (usec): min=159, max=454, avg=221.84, stdev=38.21 00:13:37.200 clat percentiles 
(usec): 00:13:37.200 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:13:37.200 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:13:37.200 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 273], 00:13:37.200 | 99.00th=[ 330], 99.50th=[ 383], 99.90th=[ 420], 99.95th=[ 420], 00:13:37.200 | 99.99th=[ 420] 00:13:37.200 bw ( KiB/s): min= 4096, max= 4096, per=17.25%, avg=4096.00, stdev= 0.00, samples=1 00:13:37.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.200 lat (usec) : 250=87.27%, 500=8.61% 00:13:37.200 lat (msec) : 20=0.19%, 50=3.93% 00:13:37.200 cpu : usr=0.50%, sys=0.70%, ctx=536, majf=0, minf=1 00:13:37.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.200 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.200 00:13:37.200 Run status group 0 (all jobs): 00:13:37.200 READ: bw=16.5MiB/s (17.3MB/s), 87.7KiB/s-6717KiB/s (89.8kB/s-6878kB/s), io=17.1MiB (17.9MB), run=1001-1035msec 00:13:37.200 WRITE: bw=23.2MiB/s (24.3MB/s), 2042KiB/s-8184KiB/s (2091kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1035msec 00:13:37.200 00:13:37.200 Disk stats (read/write): 00:13:37.200 nvme0n1: ios=1586/1843, merge=0/0, ticks=467/319, in_queue=786, util=86.07% 00:13:37.200 nvme0n2: ios=1078/1536, merge=0/0, ticks=1117/318, in_queue=1435, util=97.15% 00:13:37.200 nvme0n3: ios=1456/1536, merge=0/0, ticks=1048/296, in_queue=1344, util=97.38% 00:13:37.200 nvme0n4: ios=55/512, merge=0/0, ticks=1264/101, in_queue=1365, util=97.56% 00:13:37.200 16:06:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:37.200 [global] 00:13:37.200 thread=1 00:13:37.200 invalidate=1 00:13:37.200 rw=write 00:13:37.200 time_based=1 00:13:37.200 runtime=1 00:13:37.200 ioengine=libaio 00:13:37.200 direct=1 00:13:37.200 bs=4096 00:13:37.200 iodepth=128 00:13:37.200 norandommap=0 00:13:37.200 numjobs=1 00:13:37.200 00:13:37.200 verify_dump=1 00:13:37.200 verify_backlog=512 00:13:37.200 verify_state_save=0 00:13:37.200 do_verify=1 00:13:37.200 verify=crc32c-intel 00:13:37.200 [job0] 00:13:37.200 filename=/dev/nvme0n1 00:13:37.200 [job1] 00:13:37.200 filename=/dev/nvme0n2 00:13:37.200 [job2] 00:13:37.200 filename=/dev/nvme0n3 00:13:37.200 [job3] 00:13:37.200 filename=/dev/nvme0n4 00:13:37.200 Could not set queue depth (nvme0n1) 00:13:37.200 Could not set queue depth (nvme0n2) 00:13:37.200 Could not set queue depth (nvme0n3) 00:13:37.200 Could not set queue depth (nvme0n4) 00:13:37.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.457 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.457 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.457 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.457 fio-3.35 00:13:37.457 Starting 4 threads 00:13:38.829 00:13:38.829 job0: (groupid=0, jobs=1): err= 0: pid=769734: Mon Jul 15 16:06:24 2024 00:13:38.829 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 
00:13:38.829 slat (usec): min=3, max=17660, avg=159.21, stdev=1093.44 00:13:38.829 clat (usec): min=7103, max=38608, avg=19685.78, stdev=6533.94 00:13:38.829 lat (usec): min=7113, max=38615, avg=19844.99, stdev=6593.44 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[12780], 20.00th=[13698], 00:13:38.829 | 30.00th=[16319], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:13:38.829 | 70.00th=[20579], 80.00th=[23200], 90.00th=[29754], 95.00th=[33817], 00:13:38.829 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:13:38.829 | 99.99th=[38536] 00:13:38.829 write: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1016msec); 0 zone resets 00:13:38.829 slat (usec): min=4, max=16022, avg=181.94, stdev=1001.13 00:13:38.829 clat (usec): min=234, max=123303, avg=26215.38, stdev=20954.95 00:13:38.829 lat (usec): min=672, max=123311, avg=26397.31, stdev=21065.64 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 1090], 5.00th=[ 2278], 10.00th=[ 8356], 20.00th=[ 17433], 00:13:38.829 | 30.00th=[ 20317], 40.00th=[ 21103], 50.00th=[ 21627], 60.00th=[ 22676], 00:13:38.829 | 70.00th=[ 23725], 80.00th=[ 27919], 90.00th=[ 47973], 95.00th=[ 73925], 00:13:38.829 | 99.00th=[117965], 99.50th=[117965], 99.90th=[123208], 99.95th=[123208], 00:13:38.829 | 99.99th=[123208] 00:13:38.829 bw ( KiB/s): min=10424, max=12528, per=20.41%, avg=11476.00, stdev=1487.75, samples=2 00:13:38.829 iops : min= 2606, max= 3132, avg=2869.00, stdev=371.94, samples=2 00:13:38.829 lat (usec) : 250=0.02%, 750=0.09%, 1000=0.14% 00:13:38.829 lat (msec) : 2=2.27%, 4=1.03%, 10=5.29%, 20=34.95%, 50=51.25% 00:13:38.829 lat (msec) : 100=3.55%, 250=1.42% 00:13:38.829 cpu : usr=3.45%, sys=5.91%, ctx=346, majf=0, minf=11 00:13:38.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:38.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.829 issued rwts: total=2560,2997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.829 job1: (groupid=0, jobs=1): err= 0: pid=769735: Mon Jul 15 16:06:24 2024 00:13:38.829 read: IOPS=3619, BW=14.1MiB/s (14.8MB/s)(14.4MiB/1017msec) 00:13:38.829 slat (usec): min=2, max=7542, avg=93.59, stdev=573.86 00:13:38.829 clat (usec): min=3924, max=26187, avg=12331.26, stdev=2759.34 00:13:38.829 lat (usec): min=3940, max=26190, avg=12424.85, stdev=2785.76 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 6652], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:13:38.829 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:13:38.829 | 70.00th=[12780], 80.00th=[13566], 90.00th=[15533], 95.00th=[18744], 00:13:38.829 | 99.00th=[22414], 99.50th=[22938], 99.90th=[26084], 99.95th=[26084], 00:13:38.829 | 99.99th=[26084] 00:13:38.829 write: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec); 0 zone resets 00:13:38.829 slat (usec): min=3, max=24967, avg=152.43, stdev=1011.46 00:13:38.829 clat (msec): min=3, max=110, avg=20.37, stdev=18.71 00:13:38.829 lat (msec): min=3, max=110, avg=20.52, stdev=18.84 00:13:38.829 clat percentiles (msec): 00:13:38.829 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:13:38.829 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:13:38.829 | 70.00th=[ 21], 80.00th=[ 29], 90.00th=[ 42], 95.00th=[ 58], 00:13:38.829 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 110], 
99.95th=[ 110], 00:13:38.829 | 99.99th=[ 110] 00:13:38.829 bw ( KiB/s): min=10296, max=22224, per=28.92%, avg=16260.00, stdev=8434.37, samples=2 00:13:38.829 iops : min= 2574, max= 5556, avg=4065.00, stdev=2108.59, samples=2 00:13:38.829 lat (msec) : 4=0.21%, 10=9.57%, 20=72.65%, 50=13.10%, 100=3.48% 00:13:38.829 lat (msec) : 250=0.99% 00:13:38.829 cpu : usr=2.66%, sys=6.40%, ctx=369, majf=0, minf=15 00:13:38.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:38.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.829 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.829 job2: (groupid=0, jobs=1): err= 0: pid=769736: Mon Jul 15 16:06:24 2024 00:13:38.829 read: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(16.5MiB/1051msec) 00:13:38.829 slat (usec): min=3, max=13233, avg=115.78, stdev=772.44 00:13:38.829 clat (usec): min=4607, max=68819, avg=15355.12, stdev=8961.53 00:13:38.829 lat (usec): min=4614, max=68826, avg=15470.90, stdev=8993.83 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 6194], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11600], 00:13:38.829 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:13:38.829 | 70.00th=[14746], 80.00th=[15664], 90.00th=[21365], 95.00th=[26084], 00:13:38.829 | 99.00th=[63701], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:13:38.829 | 99.99th=[68682] 00:13:38.829 write: IOPS=4384, BW=17.1MiB/s (18.0MB/s)(18.0MiB/1051msec); 0 zone resets 00:13:38.829 slat (usec): min=4, max=13143, avg=98.71, stdev=505.39 00:13:38.829 clat (usec): min=688, max=68830, avg=14812.81, stdev=7071.98 00:13:38.829 lat (usec): min=736, max=68837, avg=14911.52, stdev=7126.16 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 2769], 5.00th=[ 5538], 10.00th=[ 7635], 20.00th=[10290], 00:13:38.829 | 30.00th=[11207], 40.00th=[12780], 50.00th=[13566], 60.00th=[14353], 00:13:38.829 | 70.00th=[14746], 80.00th=[19792], 90.00th=[23987], 95.00th=[31327], 00:13:38.829 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:13:38.829 | 99.99th=[68682] 00:13:38.829 bw ( KiB/s): min=16384, max=20464, per=32.77%, avg=18424.00, stdev=2885.00, samples=2 00:13:38.829 iops : min= 4096, max= 5116, avg=4606.00, stdev=721.25, samples=2 00:13:38.829 lat (usec) : 750=0.05%, 1000=0.12% 00:13:38.829 lat (msec) : 2=0.11%, 4=1.23%, 10=9.59%, 20=73.93%, 50=13.53% 00:13:38.829 lat (msec) : 100=1.43% 00:13:38.829 cpu : usr=5.24%, sys=8.86%, ctx=511, majf=0, minf=11 00:13:38.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:38.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.829 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.829 job3: (groupid=0, jobs=1): err= 0: pid=769737: Mon Jul 15 16:06:24 2024 00:13:38.829 read: IOPS=3016, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1013msec) 00:13:38.829 slat (usec): min=3, max=21468, avg=179.09, stdev=1244.24 00:13:38.829 clat (usec): min=5343, max=71409, avg=20535.64, stdev=10828.77 00:13:38.829 lat (usec): min=5350, max=71416, avg=20714.73, stdev=10927.93 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 6587], 
5.00th=[10945], 10.00th=[12518], 20.00th=[13304], 00:13:38.829 | 30.00th=[13960], 40.00th=[15139], 50.00th=[17433], 60.00th=[19530], 00:13:38.829 | 70.00th=[20579], 80.00th=[25297], 90.00th=[34866], 95.00th=[47449], 00:13:38.829 | 99.00th=[60031], 99.50th=[61604], 99.90th=[71828], 99.95th=[71828], 00:13:38.829 | 99.99th=[71828] 00:13:38.829 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:13:38.829 slat (usec): min=4, max=24937, avg=139.56, stdev=870.73 00:13:38.829 clat (usec): min=2789, max=71413, avg=21322.78, stdev=10664.66 00:13:38.829 lat (usec): min=2802, max=71425, avg=21462.34, stdev=10728.86 00:13:38.829 clat percentiles (usec): 00:13:38.829 | 1.00th=[ 4883], 5.00th=[ 9372], 10.00th=[12518], 20.00th=[13698], 00:13:38.829 | 30.00th=[15795], 40.00th=[19792], 50.00th=[21103], 60.00th=[21365], 00:13:38.829 | 70.00th=[22676], 80.00th=[24249], 90.00th=[27919], 95.00th=[36439], 00:13:38.829 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[71828], 00:13:38.829 | 99.99th=[71828] 00:13:38.829 bw ( KiB/s): min=12288, max=12288, per=21.86%, avg=12288.00, stdev= 0.00, samples=2 00:13:38.829 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:38.829 lat (msec) : 4=0.33%, 10=3.44%, 20=50.95%, 50=41.68%, 100=3.61% 00:13:38.829 cpu : usr=3.85%, sys=6.13%, ctx=344, majf=0, minf=13 00:13:38.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:38.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.829 issued rwts: total=3056,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.829 00:13:38.829 Run status group 0 (all jobs): 00:13:38.829 READ: bw=50.2MiB/s (52.7MB/s), 9.84MiB/s-15.7MiB/s (10.3MB/s-16.5MB/s), io=52.8MiB (55.4MB), run=1013-1051msec 00:13:38.829 WRITE: bw=54.9MiB/s (57.6MB/s), 11.5MiB/s-17.1MiB/s (12.1MB/s-18.0MB/s), io=57.7MiB (60.5MB), run=1013-1051msec 00:13:38.829 00:13:38.829 Disk stats (read/write): 00:13:38.829 nvme0n1: ios=2098/2415, merge=0/0, ticks=40475/63141, in_queue=103616, util=86.67% 00:13:38.829 nvme0n2: ios=2996/3072, merge=0/0, ticks=21899/38084, in_queue=59983, util=86.69% 00:13:38.830 nvme0n3: ios=3642/3799, merge=0/0, ticks=47869/55452, in_queue=103321, util=97.91% 00:13:38.830 nvme0n4: ios=2582/2791, merge=0/0, ticks=52102/52062, in_queue=104164, util=98.00% 00:13:38.830 16:06:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:38.830 [global] 00:13:38.830 thread=1 00:13:38.830 invalidate=1 00:13:38.830 rw=randwrite 00:13:38.830 time_based=1 00:13:38.830 runtime=1 00:13:38.830 ioengine=libaio 00:13:38.830 direct=1 00:13:38.830 bs=4096 00:13:38.830 iodepth=128 00:13:38.830 norandommap=0 00:13:38.830 numjobs=1 00:13:38.830 00:13:38.830 verify_dump=1 00:13:38.830 verify_backlog=512 00:13:38.830 verify_state_save=0 00:13:38.830 do_verify=1 00:13:38.830 verify=crc32c-intel 00:13:38.830 [job0] 00:13:38.830 filename=/dev/nvme0n1 00:13:38.830 [job1] 00:13:38.830 filename=/dev/nvme0n2 00:13:38.830 [job2] 00:13:38.830 filename=/dev/nvme0n3 00:13:38.830 [job3] 00:13:38.830 filename=/dev/nvme0n4 00:13:38.830 Could not set queue depth (nvme0n1) 00:13:38.830 Could not set queue depth (nvme0n2) 00:13:38.830 Could not set queue depth (nvme0n3) 00:13:38.830 Could not set queue depth 
(nvme0n4) 00:13:38.830 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.830 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.830 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.830 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.830 fio-3.35 00:13:38.830 Starting 4 threads 00:13:40.206 00:13:40.206 job0: (groupid=0, jobs=1): err= 0: pid=769967: Mon Jul 15 16:06:25 2024 00:13:40.206 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:13:40.206 slat (usec): min=3, max=14366, avg=200.80, stdev=1174.24 00:13:40.206 clat (usec): min=9103, max=68366, avg=26279.11, stdev=12804.39 00:13:40.206 lat (usec): min=9115, max=76210, avg=26479.91, stdev=12918.06 00:13:40.206 clat percentiles (usec): 00:13:40.206 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13698], 20.00th=[14353], 00:13:40.206 | 30.00th=[16450], 40.00th=[17171], 50.00th=[22938], 60.00th=[28181], 00:13:40.206 | 70.00th=[33162], 80.00th=[38011], 90.00th=[42206], 95.00th=[52691], 00:13:40.207 | 99.00th=[64750], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:13:40.207 | 99.99th=[68682] 00:13:40.207 write: IOPS=2197, BW=8790KiB/s (9001kB/s)(8860KiB/1008msec); 0 zone resets 00:13:40.207 slat (usec): min=4, max=15038, avg=255.97, stdev=1250.86 00:13:40.207 clat (msec): min=6, max=107, avg=32.91, stdev=20.69 00:13:40.207 lat (msec): min=8, max=107, avg=33.17, stdev=20.82 00:13:40.207 clat percentiles (msec): 00:13:40.207 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 21], 00:13:40.207 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 29], 60.00th=[ 30], 00:13:40.207 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 66], 95.00th=[ 88], 00:13:40.207 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:13:40.207 | 99.99th=[ 108] 00:13:40.207 bw ( KiB/s): min= 7008, max= 9696, per=14.38%, avg=8352.00, stdev=1900.70, samples=2 00:13:40.207 iops : min= 1752, max= 2424, avg=2088.00, stdev=475.18, samples=2 00:13:40.207 lat (msec) : 10=0.35%, 20=31.97%, 50=58.22%, 100=8.07%, 250=1.38% 00:13:40.207 cpu : usr=2.58%, sys=4.47%, ctx=286, majf=0, minf=17 00:13:40.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:40.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.207 issued rwts: total=2048,2215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.207 job1: (groupid=0, jobs=1): err= 0: pid=769968: Mon Jul 15 16:06:25 2024 00:13:40.207 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:13:40.207 slat (usec): min=2, max=11406, avg=78.79, stdev=700.43 00:13:40.207 clat (usec): min=3382, max=26818, avg=13089.60, stdev=3788.49 00:13:40.207 lat (usec): min=3386, max=26833, avg=13168.38, stdev=3849.56 00:13:40.207 clat percentiles (usec): 00:13:40.207 | 1.00th=[ 5735], 5.00th=[ 7963], 10.00th=[ 9241], 20.00th=[10421], 00:13:40.207 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[13173], 00:13:40.207 | 70.00th=[15008], 80.00th=[15795], 90.00th=[17695], 95.00th=[19530], 00:13:40.207 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25560], 99.95th=[26346], 00:13:40.207 | 99.99th=[26870] 00:13:40.207 write: IOPS=4235, BW=16.5MiB/s 
(17.3MB/s)(16.7MiB/1012msec); 0 zone resets 00:13:40.207 slat (usec): min=4, max=12543, avg=119.42, stdev=734.03 00:13:40.207 clat (usec): min=1158, max=67139, avg=17422.52, stdev=10876.44 00:13:40.207 lat (usec): min=1163, max=67146, avg=17541.94, stdev=10938.59 00:13:40.207 clat percentiles (usec): 00:13:40.207 | 1.00th=[ 4555], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[ 9896], 00:13:40.207 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13566], 60.00th=[15664], 00:13:40.207 | 70.00th=[17695], 80.00th=[21890], 90.00th=[30016], 95.00th=[40633], 00:13:40.207 | 99.00th=[61604], 99.50th=[63701], 99.90th=[65799], 99.95th=[65799], 00:13:40.207 | 99.99th=[67634] 00:13:40.207 bw ( KiB/s): min=15560, max=17712, per=28.65%, avg=16636.00, stdev=1521.69, samples=2 00:13:40.207 iops : min= 3890, max= 4428, avg=4159.00, stdev=380.42, samples=2 00:13:40.207 lat (msec) : 2=0.18%, 4=0.41%, 10=17.85%, 20=65.77%, 50=14.44% 00:13:40.207 lat (msec) : 100=1.36% 00:13:40.207 cpu : usr=2.97%, sys=5.24%, ctx=342, majf=0, minf=11 00:13:40.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:40.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.207 issued rwts: total=4096,4286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.207 job2: (groupid=0, jobs=1): err= 0: pid=769969: Mon Jul 15 16:06:25 2024 00:13:40.207 read: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:13:40.207 slat (usec): min=2, max=8937, avg=94.70, stdev=560.69 00:13:40.207 clat (usec): min=537, max=57771, avg=12202.81, stdev=3572.45 00:13:40.207 lat (usec): min=2500, max=63407, avg=12297.51, stdev=3596.21 00:13:40.207 clat percentiles (usec): 00:13:40.207 | 1.00th=[ 4948], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10683], 00:13:40.207 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:13:40.207 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14484], 95.00th=[15139], 00:13:40.207 | 99.00th=[21103], 99.50th=[23462], 99.90th=[57934], 99.95th=[57934], 00:13:40.207 | 99.99th=[57934] 00:13:40.207 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:13:40.207 slat (usec): min=3, max=10848, avg=99.37, stdev=645.44 00:13:40.207 clat (usec): min=1437, max=49917, avg=13607.87, stdev=6146.90 00:13:40.207 lat (usec): min=1448, max=50661, avg=13707.23, stdev=6185.74 00:13:40.207 clat percentiles (usec): 00:13:40.207 | 1.00th=[ 4621], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10814], 00:13:40.207 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:13:40.207 | 70.00th=[12911], 80.00th=[13829], 90.00th=[19268], 95.00th=[27919], 00:13:40.207 | 99.00th=[42206], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070], 00:13:40.207 | 99.99th=[50070] 00:13:40.207 bw ( KiB/s): min=20232, max=20480, per=35.05%, avg=20356.00, stdev=175.36, samples=2 00:13:40.207 iops : min= 5058, max= 5120, avg=5089.00, stdev=43.84, samples=2 00:13:40.207 lat (usec) : 750=0.01% 00:13:40.207 lat (msec) : 2=0.05%, 4=0.40%, 10=10.16%, 20=83.75%, 50=5.47% 00:13:40.207 lat (msec) : 100=0.17% 00:13:40.207 cpu : usr=4.10%, sys=6.19%, ctx=377, majf=0, minf=11 00:13:40.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:40.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:13:40.207 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.207 job3: (groupid=0, jobs=1): err= 0: pid=769970: Mon Jul 15 16:06:25 2024 00:13:40.207 read: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1002msec) 00:13:40.207 slat (usec): min=2, max=46883, avg=209.83, stdev=1778.59 00:13:40.207 clat (usec): min=350, max=124512, avg=25592.33, stdev=24958.24 00:13:40.207 lat (msec): min=6, max=124, avg=25.80, stdev=25.11 00:13:40.207 clat percentiles (msec): 00:13:40.207 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:13:40.207 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:13:40.207 | 70.00th=[ 19], 80.00th=[ 29], 90.00th=[ 72], 95.00th=[ 83], 00:13:40.207 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 125], 00:13:40.207 | 99.99th=[ 125] 00:13:40.207 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:13:40.207 slat (usec): min=3, max=15681, avg=138.66, stdev=873.81 00:13:40.207 clat (usec): min=6635, max=76735, avg=19352.30, stdev=12236.73 00:13:40.207 lat (usec): min=6662, max=76741, avg=19490.96, stdev=12295.73 00:13:40.207 clat percentiles (usec): 00:13:40.207 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[11469], 20.00th=[11863], 00:13:40.207 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13566], 60.00th=[14615], 00:13:40.207 | 70.00th=[20055], 80.00th=[26870], 90.00th=[33162], 95.00th=[49546], 00:13:40.207 | 99.00th=[65274], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:13:40.207 | 99.99th=[77071] 00:13:40.207 bw ( KiB/s): min= 7688, max=16384, per=20.72%, avg=12036.00, stdev=6149.00, samples=2 00:13:40.207 iops : min= 1922, max= 4096, avg=3009.00, stdev=1537.25, samples=2 00:13:40.207 lat (usec) : 500=0.02% 00:13:40.207 lat (msec) : 10=4.46%, 20=65.60%, 50=20.99%, 100=8.39%, 250=0.54% 00:13:40.207 cpu : usr=2.60%, sys=4.90%, ctx=258, majf=0, minf=13 00:13:40.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:40.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.207 issued rwts: total=2625,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.207 00:13:40.207 Run status group 0 (all jobs): 00:13:40.207 READ: bw=52.0MiB/s (54.5MB/s), 8127KiB/s-18.3MiB/s (8322kB/s-19.2MB/s), io=52.6MiB (55.2MB), run=1002-1012msec 00:13:40.207 WRITE: bw=56.7MiB/s (59.5MB/s), 8790KiB/s-20.0MiB/s (9001kB/s-20.9MB/s), io=57.4MiB (60.2MB), run=1002-1012msec 00:13:40.207 00:13:40.207 Disk stats (read/write): 00:13:40.207 nvme0n1: ios=1574/1863, merge=0/0, ticks=21645/31600, in_queue=53245, util=97.39% 00:13:40.207 nvme0n2: ios=3622/3735, merge=0/0, ticks=48152/56178, in_queue=104330, util=97.56% 00:13:40.207 nvme0n3: ios=4152/4478, merge=0/0, ticks=27088/33463, in_queue=60551, util=90.30% 00:13:40.207 nvme0n4: ios=2453/2560, merge=0/0, ticks=23087/16084, in_queue=39171, util=97.90% 00:13:40.207 16:06:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:40.207 16:06:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=770107 00:13:40.207 16:06:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:40.207 16:06:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:40.207 [global] 00:13:40.207 
thread=1 00:13:40.207 invalidate=1 00:13:40.207 rw=read 00:13:40.207 time_based=1 00:13:40.207 runtime=10 00:13:40.207 ioengine=libaio 00:13:40.207 direct=1 00:13:40.207 bs=4096 00:13:40.207 iodepth=1 00:13:40.207 norandommap=1 00:13:40.207 numjobs=1 00:13:40.207 00:13:40.207 [job0] 00:13:40.207 filename=/dev/nvme0n1 00:13:40.207 [job1] 00:13:40.207 filename=/dev/nvme0n2 00:13:40.207 [job2] 00:13:40.207 filename=/dev/nvme0n3 00:13:40.207 [job3] 00:13:40.207 filename=/dev/nvme0n4 00:13:40.207 Could not set queue depth (nvme0n1) 00:13:40.207 Could not set queue depth (nvme0n2) 00:13:40.207 Could not set queue depth (nvme0n3) 00:13:40.207 Could not set queue depth (nvme0n4) 00:13:40.207 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.207 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.207 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.207 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.207 fio-3.35 00:13:40.207 Starting 4 threads 00:13:43.489 16:06:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:43.489 16:06:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:43.489 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3158016, buflen=4096 00:13:43.489 fio: pid=770204, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:43.489 16:06:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.489 16:06:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:43.748 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=450560, buflen=4096 00:13:43.748 fio: pid=770203, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:43.748 16:06:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.748 16:06:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:44.007 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15015936, buflen=4096 00:13:44.007 fio: pid=770200, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:44.007 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:44.007 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:44.266 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=389120, buflen=4096 00:13:44.266 fio: pid=770202, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:44.266 00:13:44.266 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=770200: Mon Jul 15 16:06:30 2024 00:13:44.266 read: IOPS=1061, BW=4247KiB/s (4349kB/s)(14.3MiB/3453msec) 00:13:44.266 slat (usec): min=4, max=20863, avg=17.65, stdev=395.21 00:13:44.266 clat 
(usec): min=163, max=48207, avg=916.02, stdev=5249.68 00:13:44.266 lat (usec): min=170, max=61987, avg=930.47, stdev=5306.00 00:13:44.266 clat percentiles (usec): 00:13:44.266 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:13:44.266 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 215], 60.00th=[ 227], 00:13:44.266 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 367], 00:13:44.266 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:44.266 | 99.99th=[47973] 00:13:44.266 bw ( KiB/s): min= 96, max=14696, per=97.63%, avg=4872.00, stdev=6684.08, samples=6 00:13:44.266 iops : min= 24, max= 3674, avg=1218.00, stdev=1671.02, samples=6 00:13:44.266 lat (usec) : 250=69.08%, 500=28.42%, 750=0.76% 00:13:44.266 lat (msec) : 10=0.05%, 50=1.66% 00:13:44.266 cpu : usr=0.52%, sys=0.93%, ctx=3670, majf=0, minf=1 00:13:44.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 issued rwts: total=3667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.266 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=770202: Mon Jul 15 16:06:30 2024 00:13:44.266 read: IOPS=25, BW=102KiB/s (105kB/s)(380KiB/3721msec) 00:13:44.266 slat (usec): min=5, max=7913, avg=132.50, stdev=855.48 00:13:44.266 clat (usec): min=203, max=42024, avg=38914.17, stdev=9186.55 00:13:44.266 lat (usec): min=211, max=49012, avg=39047.75, stdev=9255.81 00:13:44.266 clat percentiles (usec): 00:13:44.266 | 1.00th=[ 204], 5.00th=[ 510], 10.00th=[40633], 20.00th=[41157], 00:13:44.266 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:44.266 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:44.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:44.266 | 99.99th=[42206] 00:13:44.266 bw ( KiB/s): min= 96, max= 128, per=2.04%, avg=102.43, stdev=11.72, samples=7 00:13:44.266 iops : min= 24, max= 32, avg=25.57, stdev= 2.94, samples=7 00:13:44.266 lat (usec) : 250=3.12%, 500=1.04%, 750=1.04% 00:13:44.266 lat (msec) : 50=93.75% 00:13:44.266 cpu : usr=0.08%, sys=0.00%, ctx=98, majf=0, minf=1 00:13:44.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.266 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=770203: Mon Jul 15 16:06:30 2024 00:13:44.266 read: IOPS=34, BW=137KiB/s (140kB/s)(440KiB/3208msec) 00:13:44.266 slat (usec): min=12, max=11883, avg=131.61, stdev=1125.64 00:13:44.266 clat (usec): min=276, max=42044, avg=28824.20, stdev=18726.50 00:13:44.266 lat (usec): min=312, max=52926, avg=28956.87, stdev=18826.13 00:13:44.266 clat percentiles (usec): 00:13:44.266 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 343], 20.00th=[ 367], 00:13:44.266 | 30.00th=[ 494], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:13:44.266 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:44.266 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:44.266 | 99.99th=[42206] 00:13:44.266 bw ( KiB/s): min= 96, max= 256, per=2.81%, avg=140.00, stdev=64.15, samples=6 00:13:44.266 iops : min= 24, max= 64, avg=35.00, stdev=16.04, samples=6 00:13:44.266 lat (usec) : 500=29.73% 00:13:44.266 lat (msec) : 50=69.37% 00:13:44.266 cpu : usr=0.12%, sys=0.06%, ctx=112, majf=0, minf=1 00:13:44.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.266 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=770204: Mon Jul 15 16:06:30 2024 00:13:44.266 read: IOPS=262, BW=1048KiB/s (1073kB/s)(3084KiB/2943msec) 00:13:44.266 slat (nsec): min=6429, max=50114, avg=13146.56, stdev=8159.46 00:13:44.266 clat (usec): min=198, max=42092, avg=3772.41, stdev=11444.05 00:13:44.266 lat (usec): min=207, max=42110, avg=3785.55, stdev=11446.86 00:13:44.266 clat percentiles (usec): 00:13:44.266 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:13:44.266 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 265], 00:13:44.266 | 70.00th=[ 318], 80.00th=[ 379], 90.00th=[ 519], 95.00th=[41157], 00:13:44.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:44.266 | 99.99th=[42206] 00:13:44.266 bw ( KiB/s): min= 96, max= 4984, per=24.37%, avg=1216.00, stdev=2128.82, samples=5 00:13:44.266 iops : min= 24, max= 1246, avg=304.00, stdev=532.20, samples=5 00:13:44.266 lat (usec) : 250=57.64%, 500=30.18%, 750=3.50% 00:13:44.266 lat (msec) : 50=8.55% 00:13:44.266 cpu : usr=0.07%, sys=0.48%, ctx=772, majf=0, minf=1 00:13:44.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.266 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.266 00:13:44.266 Run status group 0 (all jobs): 00:13:44.266 READ: bw=4990KiB/s (5110kB/s), 102KiB/s-4247KiB/s (105kB/s-4349kB/s), io=18.1MiB (19.0MB), run=2943-3721msec 00:13:44.266 00:13:44.266 Disk stats (read/write): 00:13:44.266 nvme0n1: ios=3664/0, merge=0/0, ticks=3258/0, in_queue=3258, util=95.19% 00:13:44.266 nvme0n2: ios=92/0, merge=0/0, ticks=3574/0, in_queue=3574, util=96.19% 00:13:44.266 nvme0n3: ios=107/0, merge=0/0, ticks=3049/0, in_queue=3049, util=96.38% 00:13:44.266 nvme0n4: ios=769/0, merge=0/0, ticks=2824/0, in_queue=2824, util=96.71% 00:13:44.525 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:44.525 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:44.784 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:44.784 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:45.041 16:06:30 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.041 16:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:45.300 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.300 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 770107 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:45.558 nvmf hotplug test: fio failed as expected 00:13:45.558 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.816 rmmod nvme_tcp 00:13:45.816 rmmod nvme_fabrics 00:13:45.816 rmmod nvme_keyring 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set 
-e 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 768067 ']' 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 768067 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 768067 ']' 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 768067 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 768067 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 768067' 00:13:45.816 killing process with pid 768067 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 768067 00:13:45.816 16:06:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 768067 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.079 16:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.643 16:06:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.643 00:13:48.643 real 0m24.150s 00:13:48.643 user 1m25.058s 00:13:48.643 sys 0m6.308s 00:13:48.643 16:06:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.643 16:06:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.643 ************************************ 00:13:48.643 END TEST nvmf_fio_target 00:13:48.643 ************************************ 00:13:48.643 16:06:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:48.643 16:06:34 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:48.643 16:06:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.643 16:06:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.643 16:06:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.643 ************************************ 00:13:48.643 START TEST nvmf_bdevio 00:13:48.643 ************************************ 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:48.643 * Looking for test 
storage... 00:13:48.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.643 16:06:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.644 16:06:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:50.546 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:50.546 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:50.546 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:50.547 Found net devices under 0000:09:00.0: cvl_0_0 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:50.547 
Found net devices under 0000:09:00.1: cvl_0_1 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:13:50.547 00:13:50.547 --- 10.0.0.2 ping statistics --- 00:13:50.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.547 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:13:50.547 00:13:50.547 --- 10.0.0.1 ping statistics --- 00:13:50.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.547 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=772825 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 772825 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 772825 ']' 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.547 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.547 [2024-07-15 16:06:36.528747] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:50.547 [2024-07-15 16:06:36.528850] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.806 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.806 [2024-07-15 16:06:36.599274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.806 [2024-07-15 16:06:36.709378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.806 [2024-07-15 16:06:36.709448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.806 [2024-07-15 16:06:36.709480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.806 [2024-07-15 16:06:36.709492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.806 [2024-07-15 16:06:36.709502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.806 [2024-07-15 16:06:36.709635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:50.806 [2024-07-15 16:06:36.711975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:50.806 [2024-07-15 16:06:36.712048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:50.806 [2024-07-15 16:06:36.712053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 [2024-07-15 16:06:36.871811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 Malloc0 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:13:51.067 [2024-07-15 16:06:36.925391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:51.067 { 00:13:51.067 "params": { 00:13:51.067 "name": "Nvme$subsystem", 00:13:51.067 "trtype": "$TEST_TRANSPORT", 00:13:51.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:51.067 "adrfam": "ipv4", 00:13:51.067 "trsvcid": "$NVMF_PORT", 00:13:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:51.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:51.067 "hdgst": ${hdgst:-false}, 00:13:51.067 "ddgst": ${ddgst:-false} 00:13:51.067 }, 00:13:51.067 "method": "bdev_nvme_attach_controller" 00:13:51.067 } 00:13:51.067 EOF 00:13:51.067 )") 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:51.067 16:06:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:51.067 "params": { 00:13:51.067 "name": "Nvme1", 00:13:51.067 "trtype": "tcp", 00:13:51.067 "traddr": "10.0.0.2", 00:13:51.067 "adrfam": "ipv4", 00:13:51.067 "trsvcid": "4420", 00:13:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.067 "hdgst": false, 00:13:51.067 "ddgst": false 00:13:51.067 }, 00:13:51.067 "method": "bdev_nvme_attach_controller" 00:13:51.067 }' 00:13:51.067 [2024-07-15 16:06:36.973417] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:13:51.067 [2024-07-15 16:06:36.973481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772970 ] 00:13:51.067 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.067 [2024-07-15 16:06:37.034309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.325 [2024-07-15 16:06:37.151001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.325 [2024-07-15 16:06:37.151028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.325 [2024-07-15 16:06:37.151032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.582 I/O targets: 00:13:51.583 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:51.583 00:13:51.583 00:13:51.583 CUnit - A unit testing framework for C - Version 2.1-3 00:13:51.583 http://cunit.sourceforge.net/ 00:13:51.583 00:13:51.583 00:13:51.583 Suite: bdevio tests on: Nvme1n1 00:13:51.583 Test: blockdev write read block ...passed 00:13:51.583 Test: blockdev write zeroes read block ...passed 00:13:51.583 Test: blockdev write zeroes read no split ...passed 00:13:51.583 Test: blockdev write zeroes read split ...passed 00:13:51.583 Test: blockdev write zeroes read split partial ...passed 00:13:51.583 Test: blockdev reset ...[2024-07-15 16:06:37.571793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:51.583 [2024-07-15 16:06:37.571896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5580 (9): Bad file descriptor 00:13:51.841 [2024-07-15 16:06:37.667514] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:51.841 passed 00:13:51.841 Test: blockdev write read 8 blocks ...passed 00:13:51.841 Test: blockdev write read size > 128k ...passed 00:13:51.841 Test: blockdev write read invalid size ...passed 00:13:51.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:51.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:51.841 Test: blockdev write read max offset ...passed 00:13:51.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:51.841 Test: blockdev writev readv 8 blocks ...passed 00:13:51.841 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.101 Test: blockdev writev readv block ...passed 00:13:52.101 Test: blockdev writev readv size > 128k ...passed 00:13:52.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.101 Test: blockdev comparev and writev ...[2024-07-15 16:06:37.879033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.879093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.879415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.879460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.879765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.879809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.879825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.880141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.880165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.880186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:52.101 [2024-07-15 16:06:37.880203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:52.101 passed 00:13:52.101 Test: blockdev nvme passthru rw ...passed 00:13:52.101 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:06:37.962184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:52.101 [2024-07-15 16:06:37.962210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.962351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:52.101 [2024-07-15 16:06:37.962374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.962516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:52.101 [2024-07-15 16:06:37.962546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:52.101 [2024-07-15 16:06:37.962692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:52.101 [2024-07-15 16:06:37.962715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:52.101 passed 00:13:52.101 Test: blockdev nvme admin passthru ...passed 00:13:52.101 Test: blockdev copy ...passed 00:13:52.101 00:13:52.101 Run Summary: Type Total Ran Passed Failed Inactive 00:13:52.101 suites 1 1 n/a 0 0 00:13:52.101 tests 23 23 23 0 0 00:13:52.101 asserts 152 152 152 0 n/a 00:13:52.101 00:13:52.101 Elapsed time = 1.272 seconds 00:13:52.360 16:06:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.361 rmmod nvme_tcp 00:13:52.361 rmmod nvme_fabrics 00:13:52.361 rmmod nvme_keyring 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 772825 ']' 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 772825 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
772825 ']' 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 772825 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772825 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772825' 00:13:52.361 killing process with pid 772825 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 772825 00:13:52.361 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 772825 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.928 16:06:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.830 16:06:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.830 00:13:54.830 real 0m6.527s 00:13:54.830 user 0m10.674s 00:13:54.830 sys 0m2.130s 00:13:54.830 16:06:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:54.830 16:06:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:54.830 ************************************ 00:13:54.830 END TEST nvmf_bdevio 00:13:54.830 ************************************ 00:13:54.830 16:06:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:54.830 16:06:40 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:54.830 16:06:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:54.830 16:06:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.830 16:06:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.830 ************************************ 00:13:54.830 START TEST nvmf_auth_target 00:13:54.830 ************************************ 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:54.830 * Looking for test storage... 
00:13:54.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.830 16:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.736 16:06:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:56.736 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:56.736 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:13:56.736 Found net devices under 0000:09:00.0: cvl_0_0 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:56.736 Found net devices under 0000:09:00.1: cvl_0_1 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.736 16:06:42 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:13:56.995 00:13:56.995 --- 10.0.0.2 ping statistics --- 00:13:56.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.995 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:56.995 00:13:56.995 --- 10.0.0.1 ping statistics --- 00:13:56.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.995 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=775038 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 775038 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 775038 ']' 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.995 16:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=775057 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3dc86820adb9e00706d9843fc8b55cdfc5f2d23242370888 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4Cq 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3dc86820adb9e00706d9843fc8b55cdfc5f2d23242370888 0 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3dc86820adb9e00706d9843fc8b55cdfc5f2d23242370888 0 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3dc86820adb9e00706d9843fc8b55cdfc5f2d23242370888 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4Cq 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4Cq 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.4Cq 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4b6e3aa4884ba15c0a471deded3319081d39e54de7c7c0d1b73741a29e8b4c1 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zQY 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4b6e3aa4884ba15c0a471deded3319081d39e54de7c7c0d1b73741a29e8b4c1 3 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4b6e3aa4884ba15c0a471deded3319081d39e54de7c7c0d1b73741a29e8b4c1 3 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4b6e3aa4884ba15c0a471deded3319081d39e54de7c7c0d1b73741a29e8b4c1 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zQY 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zQY 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.zQY 00:13:57.254 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f6fdbdf5de3e3fa9a2cd355492dd420 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5Jg 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f6fdbdf5de3e3fa9a2cd355492dd420 1 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f6fdbdf5de3e3fa9a2cd355492dd420 1 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=4f6fdbdf5de3e3fa9a2cd355492dd420 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:57.255 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5Jg 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5Jg 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.5Jg 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=369e0cb953079ea35ee27ad9b1af4cb1c965db0cb59c1d29 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hjs 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 369e0cb953079ea35ee27ad9b1af4cb1c965db0cb59c1d29 2 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 369e0cb953079ea35ee27ad9b1af4cb1c965db0cb59c1d29 2 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=369e0cb953079ea35ee27ad9b1af4cb1c965db0cb59c1d29 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hjs 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hjs 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Hjs 00:13:57.513 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=999ead04720e08419405e082109200c5278fef2cb1be7ef5 00:13:57.514 
16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uPX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 999ead04720e08419405e082109200c5278fef2cb1be7ef5 2 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 999ead04720e08419405e082109200c5278fef2cb1be7ef5 2 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=999ead04720e08419405e082109200c5278fef2cb1be7ef5 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uPX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uPX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.uPX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3326928d5f261bdbea3b71112ca5f3b5 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nq6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3326928d5f261bdbea3b71112ca5f3b5 1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3326928d5f261bdbea3b71112ca5f3b5 1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3326928d5f261bdbea3b71112ca5f3b5 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nq6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nq6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.nq6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=76aa1761ced575914b4a25ca64c71c66c1b472beb9c72758799276c3d19d77c6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oKy 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 76aa1761ced575914b4a25ca64c71c66c1b472beb9c72758799276c3d19d77c6 3 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 76aa1761ced575914b4a25ca64c71c66c1b472beb9c72758799276c3d19d77c6 3 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=76aa1761ced575914b4a25ca64c71c66c1b472beb9c72758799276c3d19d77c6 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oKy 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oKy 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.oKy 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 775038 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 775038 ']' 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
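The trace above is gen_dhchap_key from nvmf/common.sh building the keys[]/ckeys[] secrets: it reads len/2 random bytes with xxd -p from /dev/urandom, wraps the hex string into a DHHC-1:<digest-id>: secret through an inline python step, writes it to a mktemp'd /tmp/spdk.key-<digest>.XXX file, and locks it down with chmod 0600. A minimal standalone sketch of that flow in bash (the function name and the plain-hex payload are illustrative only; the real helper's python step emits an encoded DHHC-1 payload, which is not reproduced here):

#!/usr/bin/env bash
# Rough sketch of the key generation seen in the trace above; not the SPDK helper itself.
# Digest ids as used by the script: null=0, sha256=1, sha384=2, sha512=3.
gen_dhchap_key_sketch() {
    local digest_name=$1 digest_id=$2 len=$3
    # len is the hex-string length, so read len/2 random bytes as plain hex
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    # temp file named like the ones in the trace, e.g. /tmp/spdk.key-sha384.Hjs
    local file
    file=$(mktemp -t "spdk.key-${digest_name}.XXX")
    # the real helper encodes the key material after the DHHC-1:<id>: prefix;
    # a raw hex payload is used here only to keep the sketch self-contained
    printf 'DHHC-1:%02d:%s:\n' "$digest_id" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# usage mirroring the trace: a 48-character sha384 key and a 64-character sha512 key
gen_dhchap_key_sketch sha384 2 48
gen_dhchap_key_sketch sha512 3 64

In the test itself each generated file is then registered with keyring_file_add_key over both RPC sockets (/var/tmp/spdk.sock for the target, /var/tmp/host.sock for the host) before the authenticated bdev_nvme_attach_controller calls that follow.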
00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.514 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 775057 /var/tmp/host.sock 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 775057 ']' 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:57.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.772 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.031 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.031 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:58.031 16:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:58.031 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.031 16:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Cq 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4Cq 00:13:58.031 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4Cq 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.zQY ]] 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zQY 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zQY 00:13:58.289 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zQY 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5Jg 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5Jg 00:13:58.548 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5Jg 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Hjs ]] 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hjs 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hjs 00:13:58.806 16:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hjs 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uPX 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uPX 00:13:59.064 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uPX 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.nq6 ]] 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nq6 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nq6 00:13:59.321 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.nq6 00:13:59.578 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:59.578 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oKy 00:13:59.578 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.578 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.578 16:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.579 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oKy 00:13:59.579 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oKy 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.837 16:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.094 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.095 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.658 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.658 { 00:14:00.658 "cntlid": 1, 00:14:00.658 "qid": 0, 00:14:00.658 "state": "enabled", 00:14:00.658 "thread": "nvmf_tgt_poll_group_000", 00:14:00.658 "listen_address": { 00:14:00.658 "trtype": "TCP", 00:14:00.658 "adrfam": "IPv4", 00:14:00.658 "traddr": "10.0.0.2", 00:14:00.658 "trsvcid": "4420" 00:14:00.658 }, 00:14:00.658 "peer_address": { 00:14:00.658 "trtype": "TCP", 00:14:00.658 "adrfam": "IPv4", 00:14:00.658 "traddr": "10.0.0.1", 00:14:00.658 "trsvcid": "32972" 00:14:00.658 }, 00:14:00.658 "auth": { 00:14:00.658 "state": "completed", 00:14:00.658 "digest": "sha256", 00:14:00.658 "dhgroup": "null" 00:14:00.658 } 00:14:00.658 } 00:14:00.658 ]' 00:14:00.658 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.915 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.174 16:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.108 16:06:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.108 16:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.366 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.624 00:14:02.624 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.624 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.624 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.881 { 00:14:02.881 "cntlid": 3, 00:14:02.881 "qid": 0, 00:14:02.881 
"state": "enabled", 00:14:02.881 "thread": "nvmf_tgt_poll_group_000", 00:14:02.881 "listen_address": { 00:14:02.881 "trtype": "TCP", 00:14:02.881 "adrfam": "IPv4", 00:14:02.881 "traddr": "10.0.0.2", 00:14:02.881 "trsvcid": "4420" 00:14:02.881 }, 00:14:02.881 "peer_address": { 00:14:02.881 "trtype": "TCP", 00:14:02.881 "adrfam": "IPv4", 00:14:02.881 "traddr": "10.0.0.1", 00:14:02.881 "trsvcid": "33012" 00:14:02.881 }, 00:14:02.881 "auth": { 00:14:02.881 "state": "completed", 00:14:02.881 "digest": "sha256", 00:14:02.881 "dhgroup": "null" 00:14:02.881 } 00:14:02.881 } 00:14:02.881 ]' 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.881 16:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.139 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.075 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.076 16:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:04.363 16:06:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.363 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.621 00:14:04.621 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.621 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.621 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.878 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.878 { 00:14:04.878 "cntlid": 5, 00:14:04.878 "qid": 0, 00:14:04.878 "state": "enabled", 00:14:04.878 "thread": "nvmf_tgt_poll_group_000", 00:14:04.878 "listen_address": { 00:14:04.878 "trtype": "TCP", 00:14:04.878 "adrfam": "IPv4", 00:14:04.878 "traddr": "10.0.0.2", 00:14:04.878 "trsvcid": "4420" 00:14:04.879 }, 00:14:04.879 "peer_address": { 00:14:04.879 "trtype": "TCP", 00:14:04.879 "adrfam": "IPv4", 00:14:04.879 "traddr": "10.0.0.1", 00:14:04.879 "trsvcid": "33046" 00:14:04.879 }, 00:14:04.879 "auth": { 00:14:04.879 "state": "completed", 00:14:04.879 "digest": "sha256", 00:14:04.879 "dhgroup": "null" 00:14:04.879 } 00:14:04.879 } 00:14:04.879 ]' 00:14:04.879 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.879 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.879 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.879 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:04.879 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:05.138 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.138 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.138 16:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.397 16:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:06.334 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:06.902 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.902 { 00:14:06.902 "cntlid": 7, 00:14:06.902 "qid": 0, 00:14:06.902 "state": "enabled", 00:14:06.902 "thread": "nvmf_tgt_poll_group_000", 00:14:06.902 "listen_address": { 00:14:06.902 "trtype": "TCP", 00:14:06.902 "adrfam": "IPv4", 00:14:06.902 "traddr": "10.0.0.2", 00:14:06.902 "trsvcid": "4420" 00:14:06.902 }, 00:14:06.902 "peer_address": { 00:14:06.902 "trtype": "TCP", 00:14:06.902 "adrfam": "IPv4", 00:14:06.902 "traddr": "10.0.0.1", 00:14:06.902 "trsvcid": "33068" 00:14:06.902 }, 00:14:06.902 "auth": { 00:14:06.902 "state": "completed", 00:14:06.902 "digest": "sha256", 00:14:06.902 "dhgroup": "null" 00:14:06.902 } 00:14:06.902 } 00:14:06.902 ]' 00:14:06.902 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.160 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.160 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.160 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:07.160 16:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.160 16:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.160 16:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.160 16:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.431 16:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.371 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.629 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.886 00:14:08.886 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.886 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.886 16:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.144 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.145 { 00:14:09.145 "cntlid": 9, 00:14:09.145 "qid": 0, 00:14:09.145 "state": "enabled", 00:14:09.145 "thread": "nvmf_tgt_poll_group_000", 00:14:09.145 "listen_address": { 00:14:09.145 "trtype": "TCP", 00:14:09.145 "adrfam": "IPv4", 00:14:09.145 "traddr": "10.0.0.2", 00:14:09.145 "trsvcid": "4420" 00:14:09.145 }, 00:14:09.145 "peer_address": { 00:14:09.145 "trtype": "TCP", 00:14:09.145 "adrfam": "IPv4", 00:14:09.145 "traddr": "10.0.0.1", 00:14:09.145 "trsvcid": "33094" 00:14:09.145 }, 00:14:09.145 "auth": { 00:14:09.145 "state": "completed", 00:14:09.145 "digest": "sha256", 00:14:09.145 "dhgroup": "ffdhe2048" 00:14:09.145 } 00:14:09.145 } 00:14:09.145 ]' 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.145 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.403 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.403 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.403 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.662 16:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.601 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.168 00:14:11.168 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.168 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.168 16:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.427 { 00:14:11.427 "cntlid": 11, 00:14:11.427 "qid": 0, 00:14:11.427 "state": "enabled", 00:14:11.427 "thread": "nvmf_tgt_poll_group_000", 00:14:11.427 "listen_address": { 00:14:11.427 "trtype": "TCP", 00:14:11.427 "adrfam": "IPv4", 00:14:11.427 "traddr": "10.0.0.2", 00:14:11.427 "trsvcid": "4420" 00:14:11.427 }, 00:14:11.427 "peer_address": { 00:14:11.427 "trtype": "TCP", 00:14:11.427 "adrfam": "IPv4", 00:14:11.427 "traddr": "10.0.0.1", 00:14:11.427 "trsvcid": "46082" 00:14:11.427 }, 00:14:11.427 "auth": { 00:14:11.427 "state": "completed", 00:14:11.427 "digest": "sha256", 00:14:11.427 "dhgroup": "ffdhe2048" 00:14:11.427 } 00:14:11.427 } 00:14:11.427 ]' 00:14:11.427 
16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.427 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.684 16:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.618 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.876 16:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.134 00:14:13.134 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.134 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.134 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.390 { 00:14:13.390 "cntlid": 13, 00:14:13.390 "qid": 0, 00:14:13.390 "state": "enabled", 00:14:13.390 "thread": "nvmf_tgt_poll_group_000", 00:14:13.390 "listen_address": { 00:14:13.390 "trtype": "TCP", 00:14:13.390 "adrfam": "IPv4", 00:14:13.390 "traddr": "10.0.0.2", 00:14:13.390 "trsvcid": "4420" 00:14:13.390 }, 00:14:13.390 "peer_address": { 00:14:13.390 "trtype": "TCP", 00:14:13.390 "adrfam": "IPv4", 00:14:13.390 "traddr": "10.0.0.1", 00:14:13.390 "trsvcid": "46124" 00:14:13.390 }, 00:14:13.390 "auth": { 00:14:13.390 "state": "completed", 00:14:13.390 "digest": "sha256", 00:14:13.390 "dhgroup": "ffdhe2048" 00:14:13.390 } 00:14:13.390 } 00:14:13.390 ]' 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.390 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.647 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.647 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.647 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.647 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.647 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.904 16:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.862 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:15.120 16:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:15.378 00:14:15.378 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.378 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:15.378 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.636 { 00:14:15.636 "cntlid": 15, 00:14:15.636 "qid": 0, 00:14:15.636 "state": "enabled", 00:14:15.636 "thread": "nvmf_tgt_poll_group_000", 00:14:15.636 "listen_address": { 00:14:15.636 "trtype": "TCP", 00:14:15.636 "adrfam": "IPv4", 00:14:15.636 "traddr": "10.0.0.2", 00:14:15.636 "trsvcid": "4420" 00:14:15.636 }, 00:14:15.636 "peer_address": { 00:14:15.636 "trtype": "TCP", 00:14:15.636 "adrfam": "IPv4", 00:14:15.636 "traddr": "10.0.0.1", 00:14:15.636 "trsvcid": "46144" 00:14:15.636 }, 00:14:15.636 "auth": { 00:14:15.636 "state": "completed", 00:14:15.636 "digest": "sha256", 00:14:15.636 "dhgroup": "ffdhe2048" 00:14:15.636 } 00:14:15.636 } 00:14:15.636 ]' 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.636 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.894 16:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.832 16:07:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.832 16:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.090 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.674 00:14:17.674 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.674 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.674 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.932 { 00:14:17.932 "cntlid": 17, 00:14:17.932 "qid": 0, 00:14:17.932 "state": "enabled", 00:14:17.932 "thread": "nvmf_tgt_poll_group_000", 00:14:17.932 "listen_address": { 00:14:17.932 "trtype": "TCP", 00:14:17.932 "adrfam": "IPv4", 
00:14:17.932 "traddr": "10.0.0.2", 00:14:17.932 "trsvcid": "4420" 00:14:17.932 }, 00:14:17.932 "peer_address": { 00:14:17.932 "trtype": "TCP", 00:14:17.932 "adrfam": "IPv4", 00:14:17.932 "traddr": "10.0.0.1", 00:14:17.932 "trsvcid": "46170" 00:14:17.932 }, 00:14:17.932 "auth": { 00:14:17.932 "state": "completed", 00:14:17.932 "digest": "sha256", 00:14:17.932 "dhgroup": "ffdhe3072" 00:14:17.932 } 00:14:17.932 } 00:14:17.932 ]' 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.932 16:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.190 16:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:19.163 16:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.163 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:19.421 16:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.421 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.679 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.937 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.195 16:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.195 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.195 { 00:14:20.195 "cntlid": 19, 00:14:20.195 "qid": 0, 00:14:20.195 "state": "enabled", 00:14:20.195 "thread": "nvmf_tgt_poll_group_000", 00:14:20.195 "listen_address": { 00:14:20.195 "trtype": "TCP", 00:14:20.195 "adrfam": "IPv4", 00:14:20.195 "traddr": "10.0.0.2", 00:14:20.195 "trsvcid": "4420" 00:14:20.195 }, 00:14:20.195 "peer_address": { 00:14:20.195 "trtype": "TCP", 00:14:20.195 "adrfam": "IPv4", 00:14:20.195 "traddr": "10.0.0.1", 00:14:20.195 "trsvcid": "47410" 00:14:20.195 }, 00:14:20.195 "auth": { 00:14:20.195 "state": "completed", 00:14:20.195 "digest": "sha256", 00:14:20.195 "dhgroup": "ffdhe3072" 00:14:20.195 } 00:14:20.195 } 00:14:20.195 ]' 00:14:20.195 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.195 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.195 16:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.195 16:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.195 16:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.195 16:07:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.195 16:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.195 16:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.452 16:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.390 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.648 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.905 00:14:21.905 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.905 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.905 16:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.163 { 00:14:22.163 "cntlid": 21, 00:14:22.163 "qid": 0, 00:14:22.163 "state": "enabled", 00:14:22.163 "thread": "nvmf_tgt_poll_group_000", 00:14:22.163 "listen_address": { 00:14:22.163 "trtype": "TCP", 00:14:22.163 "adrfam": "IPv4", 00:14:22.163 "traddr": "10.0.0.2", 00:14:22.163 "trsvcid": "4420" 00:14:22.163 }, 00:14:22.163 "peer_address": { 00:14:22.163 "trtype": "TCP", 00:14:22.163 "adrfam": "IPv4", 00:14:22.163 "traddr": "10.0.0.1", 00:14:22.163 "trsvcid": "47438" 00:14:22.163 }, 00:14:22.163 "auth": { 00:14:22.163 "state": "completed", 00:14:22.163 "digest": "sha256", 00:14:22.163 "dhgroup": "ffdhe3072" 00:14:22.163 } 00:14:22.163 } 00:14:22.163 ]' 00:14:22.163 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.421 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.678 16:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
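For reference, the per-key cycle the trace keeps repeating (most recently sha256 / ffdhe3072 / key2, ending with the disconnect just above) condenses to roughly the following. This is an illustrative sketch rather than the verbatim auth.sh source: key2/ckey2 are assumed to be key names registered with the host keyring earlier in the run, the host NQN is shortened to a variable, and the target-side RPCs are assumed to go to the default RPC socket. All flags shown appear in the trace itself.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/dhgroup combination under test.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host on the subsystem with the key pair under test.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, forcing DH-HMAC-CHAP with the same key pair,
  # confirm it shows up, then tear it down again.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0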
00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.614 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.872 16:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.130 00:14:24.130 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.130 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.130 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.388 { 00:14:24.388 "cntlid": 23, 00:14:24.388 "qid": 0, 00:14:24.388 "state": "enabled", 00:14:24.388 "thread": "nvmf_tgt_poll_group_000", 00:14:24.388 "listen_address": { 00:14:24.388 "trtype": "TCP", 00:14:24.388 "adrfam": "IPv4", 00:14:24.388 "traddr": "10.0.0.2", 00:14:24.388 "trsvcid": "4420" 00:14:24.388 }, 00:14:24.388 "peer_address": { 00:14:24.388 "trtype": "TCP", 00:14:24.388 "adrfam": "IPv4", 00:14:24.388 "traddr": "10.0.0.1", 00:14:24.388 "trsvcid": "47476" 00:14:24.388 }, 00:14:24.388 "auth": { 00:14:24.388 "state": "completed", 00:14:24.388 "digest": "sha256", 00:14:24.388 "dhgroup": "ffdhe3072" 00:14:24.388 } 00:14:24.388 } 00:14:24.388 ]' 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.388 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.646 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.646 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.646 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.646 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.646 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.904 16:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.840 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.099 16:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.356 00:14:26.356 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.356 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.356 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.613 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.613 { 00:14:26.613 "cntlid": 25, 00:14:26.613 "qid": 0, 00:14:26.613 "state": "enabled", 00:14:26.613 "thread": "nvmf_tgt_poll_group_000", 00:14:26.613 "listen_address": { 00:14:26.613 "trtype": "TCP", 00:14:26.613 "adrfam": "IPv4", 00:14:26.613 "traddr": "10.0.0.2", 00:14:26.613 "trsvcid": "4420" 00:14:26.613 }, 00:14:26.613 "peer_address": { 00:14:26.613 "trtype": "TCP", 00:14:26.613 "adrfam": "IPv4", 00:14:26.613 "traddr": "10.0.0.1", 00:14:26.613 "trsvcid": "47500" 00:14:26.613 }, 00:14:26.614 "auth": { 00:14:26.614 "state": "completed", 00:14:26.614 "digest": "sha256", 00:14:26.614 "dhgroup": "ffdhe4096" 00:14:26.614 } 00:14:26.614 } 00:14:26.614 ]' 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.614 16:07:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.614 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.872 16:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.808 16:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.065 16:07:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.065 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.633 00:14:28.633 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.633 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.633 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.890 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.890 { 00:14:28.890 "cntlid": 27, 00:14:28.890 "qid": 0, 00:14:28.890 "state": "enabled", 00:14:28.890 "thread": "nvmf_tgt_poll_group_000", 00:14:28.890 "listen_address": { 00:14:28.890 "trtype": "TCP", 00:14:28.890 "adrfam": "IPv4", 00:14:28.890 "traddr": "10.0.0.2", 00:14:28.890 "trsvcid": "4420" 00:14:28.890 }, 00:14:28.890 "peer_address": { 00:14:28.891 "trtype": "TCP", 00:14:28.891 "adrfam": "IPv4", 00:14:28.891 "traddr": "10.0.0.1", 00:14:28.891 "trsvcid": "47528" 00:14:28.891 }, 00:14:28.891 "auth": { 00:14:28.891 "state": "completed", 00:14:28.891 "digest": "sha256", 00:14:28.891 "dhgroup": "ffdhe4096" 00:14:28.891 } 00:14:28.891 } 00:14:28.891 ]' 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.891 16:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.149 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.082 16:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.339 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.597 00:14:30.597 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.597 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.597 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.854 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.854 { 00:14:30.854 "cntlid": 29, 00:14:30.854 "qid": 0, 00:14:30.854 "state": "enabled", 00:14:30.854 "thread": "nvmf_tgt_poll_group_000", 00:14:30.854 "listen_address": { 00:14:30.855 "trtype": "TCP", 00:14:30.855 "adrfam": "IPv4", 00:14:30.855 "traddr": "10.0.0.2", 00:14:30.855 "trsvcid": "4420" 00:14:30.855 }, 00:14:30.855 "peer_address": { 00:14:30.855 "trtype": "TCP", 00:14:30.855 "adrfam": "IPv4", 00:14:30.855 "traddr": "10.0.0.1", 00:14:30.855 "trsvcid": "57294" 00:14:30.855 }, 00:14:30.855 "auth": { 00:14:30.855 "state": "completed", 00:14:30.855 "digest": "sha256", 00:14:30.855 "dhgroup": "ffdhe4096" 00:14:30.855 } 00:14:30.855 } 00:14:30.855 ]' 00:14:30.855 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.112 16:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.370 16:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
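The qpairs JSON printed above is what the script actually asserts on: the negotiated digest, dhgroup, and final authentication state of the admin qpair (qid 0). A stand-alone check along the same lines, using the same jq filters as the trace and assuming the default target RPC socket, looks like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The first qpair carries the DH-HMAC-CHAP parameters negotiated above.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]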
00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.298 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.555 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.813 00:14:32.813 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.813 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.813 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.069 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.069 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.070 { 00:14:33.070 "cntlid": 31, 00:14:33.070 "qid": 0, 00:14:33.070 "state": "enabled", 00:14:33.070 "thread": "nvmf_tgt_poll_group_000", 00:14:33.070 "listen_address": { 00:14:33.070 "trtype": "TCP", 00:14:33.070 "adrfam": "IPv4", 00:14:33.070 "traddr": "10.0.0.2", 00:14:33.070 "trsvcid": 
"4420" 00:14:33.070 }, 00:14:33.070 "peer_address": { 00:14:33.070 "trtype": "TCP", 00:14:33.070 "adrfam": "IPv4", 00:14:33.070 "traddr": "10.0.0.1", 00:14:33.070 "trsvcid": "57318" 00:14:33.070 }, 00:14:33.070 "auth": { 00:14:33.070 "state": "completed", 00:14:33.070 "digest": "sha256", 00:14:33.070 "dhgroup": "ffdhe4096" 00:14:33.070 } 00:14:33.070 } 00:14:33.070 ]' 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.070 16:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.070 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.070 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.327 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.327 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.327 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.585 16:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.550 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.117 00:14:35.117 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.117 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.117 16:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.375 { 00:14:35.375 "cntlid": 33, 00:14:35.375 "qid": 0, 00:14:35.375 "state": "enabled", 00:14:35.375 "thread": "nvmf_tgt_poll_group_000", 00:14:35.375 "listen_address": { 00:14:35.375 "trtype": "TCP", 00:14:35.375 "adrfam": "IPv4", 00:14:35.375 "traddr": "10.0.0.2", 00:14:35.375 "trsvcid": "4420" 00:14:35.375 }, 00:14:35.375 "peer_address": { 00:14:35.375 "trtype": "TCP", 00:14:35.375 "adrfam": "IPv4", 00:14:35.375 "traddr": "10.0.0.1", 00:14:35.375 "trsvcid": "57346" 00:14:35.375 }, 00:14:35.375 "auth": { 00:14:35.375 "state": "completed", 00:14:35.375 "digest": "sha256", 00:14:35.375 "dhgroup": "ffdhe6144" 00:14:35.375 } 00:14:35.375 } 00:14:35.375 ]' 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.375 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.634 16:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:36.570 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.570 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.570 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.570 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.571 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.571 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.571 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.571 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.140 16:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.707 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.707 { 00:14:37.707 "cntlid": 35, 00:14:37.707 "qid": 0, 00:14:37.707 "state": "enabled", 00:14:37.707 "thread": "nvmf_tgt_poll_group_000", 00:14:37.707 "listen_address": { 00:14:37.707 "trtype": "TCP", 00:14:37.707 "adrfam": "IPv4", 00:14:37.707 "traddr": "10.0.0.2", 00:14:37.707 "trsvcid": "4420" 00:14:37.707 }, 00:14:37.707 "peer_address": { 00:14:37.707 "trtype": "TCP", 00:14:37.707 "adrfam": "IPv4", 00:14:37.707 "traddr": "10.0.0.1", 00:14:37.707 "trsvcid": "57370" 00:14:37.707 }, 00:14:37.707 "auth": { 00:14:37.707 "state": "completed", 00:14:37.707 "digest": "sha256", 00:14:37.707 "dhgroup": "ffdhe6144" 00:14:37.707 } 00:14:37.707 } 00:14:37.707 ]' 00:14:37.707 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.965 16:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.224 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:39.161 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
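Each cycle also exercises the same key pair through the kernel initiator, as in the nvme connect / nvme disconnect just above. nvme-cli takes the DH-HMAC-CHAP secrets directly on the command line; in the sketch below the host UUID and secrets are placeholders, not the values from this run, and the tcp trsvcid is left at its default as in the trace.

  # In-band authentication from the Linux host (placeholder UUID and secrets).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
      --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'

  # Drop the connection again once the controller has been verified.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0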
00:14:39.161 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.161 16:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.161 16:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 16:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.162 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.162 16:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.420 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.988 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.988 { 00:14:39.988 "cntlid": 37, 00:14:39.988 "qid": 0, 00:14:39.988 "state": "enabled", 00:14:39.988 "thread": "nvmf_tgt_poll_group_000", 00:14:39.988 "listen_address": { 00:14:39.988 "trtype": "TCP", 00:14:39.988 "adrfam": "IPv4", 00:14:39.988 "traddr": "10.0.0.2", 00:14:39.988 "trsvcid": "4420" 00:14:39.988 }, 00:14:39.988 "peer_address": { 00:14:39.988 "trtype": "TCP", 00:14:39.988 "adrfam": "IPv4", 00:14:39.988 "traddr": "10.0.0.1", 00:14:39.988 "trsvcid": "36574" 00:14:39.988 }, 00:14:39.988 "auth": { 00:14:39.988 "state": "completed", 00:14:39.988 "digest": "sha256", 00:14:39.988 "dhgroup": "ffdhe6144" 00:14:39.988 } 00:14:39.988 } 00:14:39.988 ]' 00:14:39.988 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.246 16:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.246 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.503 16:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.439 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.697 16:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.263 00:14:42.264 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.264 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.264 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.521 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.521 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.522 { 00:14:42.522 "cntlid": 39, 00:14:42.522 "qid": 0, 00:14:42.522 "state": "enabled", 00:14:42.522 "thread": "nvmf_tgt_poll_group_000", 00:14:42.522 "listen_address": { 00:14:42.522 "trtype": "TCP", 00:14:42.522 "adrfam": "IPv4", 00:14:42.522 "traddr": "10.0.0.2", 00:14:42.522 "trsvcid": "4420" 00:14:42.522 }, 00:14:42.522 "peer_address": { 00:14:42.522 "trtype": "TCP", 00:14:42.522 "adrfam": "IPv4", 00:14:42.522 "traddr": "10.0.0.1", 00:14:42.522 "trsvcid": "36616" 00:14:42.522 }, 00:14:42.522 "auth": { 00:14:42.522 "state": "completed", 00:14:42.522 "digest": "sha256", 00:14:42.522 "dhgroup": "ffdhe6144" 00:14:42.522 } 00:14:42.522 } 00:14:42.522 ]' 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.522 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.781 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.781 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.781 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.781 16:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.714 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.974 16:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.233 16:07:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.233 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.233 16:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.800 00:14:45.058 16:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.058 16:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.058 16:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.316 { 00:14:45.316 "cntlid": 41, 00:14:45.316 "qid": 0, 00:14:45.316 "state": "enabled", 00:14:45.316 "thread": "nvmf_tgt_poll_group_000", 00:14:45.316 "listen_address": { 00:14:45.316 "trtype": "TCP", 00:14:45.316 "adrfam": "IPv4", 00:14:45.316 "traddr": "10.0.0.2", 00:14:45.316 "trsvcid": "4420" 00:14:45.316 }, 00:14:45.316 "peer_address": { 00:14:45.316 "trtype": "TCP", 00:14:45.316 "adrfam": "IPv4", 00:14:45.316 "traddr": "10.0.0.1", 00:14:45.316 "trsvcid": "36630" 00:14:45.316 }, 00:14:45.316 "auth": { 00:14:45.316 "state": "completed", 00:14:45.316 "digest": "sha256", 00:14:45.316 "dhgroup": "ffdhe8192" 00:14:45.316 } 00:14:45.316 } 00:14:45.316 ]' 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.316 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.574 16:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.512 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.770 16:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.717 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.717 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.717 { 00:14:47.717 "cntlid": 43, 00:14:47.717 "qid": 0, 00:14:47.717 "state": "enabled", 00:14:47.717 "thread": "nvmf_tgt_poll_group_000", 00:14:47.718 "listen_address": { 00:14:47.718 "trtype": "TCP", 00:14:47.718 "adrfam": "IPv4", 00:14:47.718 "traddr": "10.0.0.2", 00:14:47.718 "trsvcid": "4420" 00:14:47.718 }, 00:14:47.718 "peer_address": { 00:14:47.718 "trtype": "TCP", 00:14:47.718 "adrfam": "IPv4", 00:14:47.718 "traddr": "10.0.0.1", 00:14:47.718 "trsvcid": "36662" 00:14:47.718 }, 00:14:47.718 "auth": { 00:14:47.718 "state": "completed", 00:14:47.718 "digest": "sha256", 00:14:47.718 "dhgroup": "ffdhe8192" 00:14:47.718 } 00:14:47.718 } 00:14:47.718 ]' 00:14:47.718 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.975 16:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.233 16:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:49.181 16:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.181 16:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.181 16:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.181 16:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.181 16:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.181 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:14:49.181 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.181 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.481 16:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.420 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.420 16:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.679 { 00:14:50.679 "cntlid": 45, 00:14:50.679 "qid": 0, 00:14:50.679 "state": "enabled", 00:14:50.679 "thread": "nvmf_tgt_poll_group_000", 00:14:50.679 "listen_address": { 00:14:50.679 "trtype": "TCP", 00:14:50.679 "adrfam": "IPv4", 00:14:50.679 "traddr": "10.0.0.2", 00:14:50.679 
"trsvcid": "4420" 00:14:50.679 }, 00:14:50.679 "peer_address": { 00:14:50.679 "trtype": "TCP", 00:14:50.679 "adrfam": "IPv4", 00:14:50.679 "traddr": "10.0.0.1", 00:14:50.679 "trsvcid": "39548" 00:14:50.679 }, 00:14:50.679 "auth": { 00:14:50.679 "state": "completed", 00:14:50.679 "digest": "sha256", 00:14:50.679 "dhgroup": "ffdhe8192" 00:14:50.679 } 00:14:50.679 } 00:14:50.679 ]' 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.679 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.937 16:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.874 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.131 16:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.066 00:14:53.066 16:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.066 16:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.066 16:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.066 { 00:14:53.066 "cntlid": 47, 00:14:53.066 "qid": 0, 00:14:53.066 "state": "enabled", 00:14:53.066 "thread": "nvmf_tgt_poll_group_000", 00:14:53.066 "listen_address": { 00:14:53.066 "trtype": "TCP", 00:14:53.066 "adrfam": "IPv4", 00:14:53.066 "traddr": "10.0.0.2", 00:14:53.066 "trsvcid": "4420" 00:14:53.066 }, 00:14:53.066 "peer_address": { 00:14:53.066 "trtype": "TCP", 00:14:53.066 "adrfam": "IPv4", 00:14:53.066 "traddr": "10.0.0.1", 00:14:53.066 "trsvcid": "39578" 00:14:53.066 }, 00:14:53.066 "auth": { 00:14:53.066 "state": "completed", 00:14:53.066 "digest": "sha256", 00:14:53.066 "dhgroup": "ffdhe8192" 00:14:53.066 } 00:14:53.066 } 00:14:53.066 ]' 00:14:53.066 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:14:53.323 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.580 16:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:14:54.514 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.514 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.514 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.515 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.772 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.030 00:14:55.030 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.030 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.030 16:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.287 { 00:14:55.287 "cntlid": 49, 00:14:55.287 "qid": 0, 00:14:55.287 "state": "enabled", 00:14:55.287 "thread": "nvmf_tgt_poll_group_000", 00:14:55.287 "listen_address": { 00:14:55.287 "trtype": "TCP", 00:14:55.287 "adrfam": "IPv4", 00:14:55.287 "traddr": "10.0.0.2", 00:14:55.287 "trsvcid": "4420" 00:14:55.287 }, 00:14:55.287 "peer_address": { 00:14:55.287 "trtype": "TCP", 00:14:55.287 "adrfam": "IPv4", 00:14:55.287 "traddr": "10.0.0.1", 00:14:55.287 "trsvcid": "39620" 00:14:55.287 }, 00:14:55.287 "auth": { 00:14:55.287 "state": "completed", 00:14:55.287 "digest": "sha384", 00:14:55.287 "dhgroup": "null" 00:14:55.287 } 00:14:55.287 } 00:14:55.287 ]' 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.287 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.545 16:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.481 16:07:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.481 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.739 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.997 00:14:56.997 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.997 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.997 16:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.255 { 00:14:57.255 "cntlid": 51, 00:14:57.255 "qid": 0, 00:14:57.255 "state": "enabled", 00:14:57.255 "thread": "nvmf_tgt_poll_group_000", 00:14:57.255 "listen_address": { 00:14:57.255 "trtype": "TCP", 00:14:57.255 "adrfam": "IPv4", 00:14:57.255 "traddr": "10.0.0.2", 00:14:57.255 "trsvcid": "4420" 00:14:57.255 }, 00:14:57.255 "peer_address": { 00:14:57.255 "trtype": "TCP", 00:14:57.255 "adrfam": "IPv4", 00:14:57.255 "traddr": "10.0.0.1", 00:14:57.255 "trsvcid": "39628" 00:14:57.255 }, 00:14:57.255 "auth": { 00:14:57.255 "state": "completed", 00:14:57.255 "digest": "sha384", 00:14:57.255 "dhgroup": "null" 00:14:57.255 } 00:14:57.255 } 00:14:57.255 ]' 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:57.255 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.512 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.512 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.512 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.770 16:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.706 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:58.965 
16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.965 16:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.223 00:14:59.223 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.223 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.223 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.482 { 00:14:59.482 "cntlid": 53, 00:14:59.482 "qid": 0, 00:14:59.482 "state": "enabled", 00:14:59.482 "thread": "nvmf_tgt_poll_group_000", 00:14:59.482 "listen_address": { 00:14:59.482 "trtype": "TCP", 00:14:59.482 "adrfam": "IPv4", 00:14:59.482 "traddr": "10.0.0.2", 00:14:59.482 "trsvcid": "4420" 00:14:59.482 }, 00:14:59.482 "peer_address": { 00:14:59.482 "trtype": "TCP", 00:14:59.482 "adrfam": "IPv4", 00:14:59.482 "traddr": "10.0.0.1", 00:14:59.482 "trsvcid": "39668" 00:14:59.482 }, 00:14:59.482 "auth": { 00:14:59.482 "state": "completed", 00:14:59.482 "digest": "sha384", 00:14:59.482 "dhgroup": "null" 00:14:59.482 } 00:14:59.482 } 00:14:59.482 ]' 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.482 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.049 16:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:00.619 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.619 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.619 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.619 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.877 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.877 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.877 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.877 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.136 16:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.393 00:15:01.393 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.393 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.393 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.650 { 00:15:01.650 "cntlid": 55, 00:15:01.650 "qid": 0, 00:15:01.650 "state": "enabled", 00:15:01.650 "thread": "nvmf_tgt_poll_group_000", 00:15:01.650 "listen_address": { 00:15:01.650 "trtype": "TCP", 00:15:01.650 "adrfam": "IPv4", 00:15:01.650 "traddr": "10.0.0.2", 00:15:01.650 "trsvcid": "4420" 00:15:01.650 }, 00:15:01.650 "peer_address": { 00:15:01.650 "trtype": "TCP", 00:15:01.650 "adrfam": "IPv4", 00:15:01.650 "traddr": "10.0.0.1", 00:15:01.650 "trsvcid": "37612" 00:15:01.650 }, 00:15:01.650 "auth": { 00:15:01.650 "state": "completed", 00:15:01.650 "digest": "sha384", 00:15:01.650 "dhgroup": "null" 00:15:01.650 } 00:15:01.650 } 00:15:01.650 ]' 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:01.650 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.908 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.908 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.908 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.908 16:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:02.842 16:07:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.842 16:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.100 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.667 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.667 { 00:15:03.667 "cntlid": 57, 00:15:03.667 "qid": 0, 00:15:03.667 "state": "enabled", 00:15:03.667 "thread": "nvmf_tgt_poll_group_000", 00:15:03.667 "listen_address": { 00:15:03.667 "trtype": "TCP", 00:15:03.667 "adrfam": "IPv4", 00:15:03.667 "traddr": "10.0.0.2", 00:15:03.667 "trsvcid": "4420" 00:15:03.667 }, 00:15:03.667 "peer_address": { 00:15:03.667 "trtype": "TCP", 00:15:03.667 "adrfam": "IPv4", 00:15:03.667 "traddr": "10.0.0.1", 00:15:03.667 "trsvcid": "37646" 00:15:03.667 }, 00:15:03.667 "auth": { 00:15:03.667 "state": "completed", 00:15:03.667 "digest": "sha384", 00:15:03.667 "dhgroup": "ffdhe2048" 00:15:03.667 } 00:15:03.667 } 00:15:03.667 ]' 00:15:03.667 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.925 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.183 16:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.117 16:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.117 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.118 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.118 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.118 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.118 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.118 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.686 00:15:05.686 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.686 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.686 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.952 { 00:15:05.952 "cntlid": 59, 00:15:05.952 "qid": 0, 00:15:05.952 "state": "enabled", 00:15:05.952 "thread": "nvmf_tgt_poll_group_000", 00:15:05.952 "listen_address": { 00:15:05.952 "trtype": "TCP", 00:15:05.952 "adrfam": "IPv4", 00:15:05.952 "traddr": "10.0.0.2", 00:15:05.952 "trsvcid": "4420" 00:15:05.952 }, 00:15:05.952 "peer_address": { 00:15:05.952 "trtype": "TCP", 00:15:05.952 "adrfam": "IPv4", 00:15:05.952 
"traddr": "10.0.0.1", 00:15:05.952 "trsvcid": "37678" 00:15:05.952 }, 00:15:05.952 "auth": { 00:15:05.952 "state": "completed", 00:15:05.952 "digest": "sha384", 00:15:05.952 "dhgroup": "ffdhe2048" 00:15:05.952 } 00:15:05.952 } 00:15:05.952 ]' 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.952 16:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.242 16:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.176 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.433 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.690 00:15:07.690 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.690 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.690 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.947 { 00:15:07.947 "cntlid": 61, 00:15:07.947 "qid": 0, 00:15:07.947 "state": "enabled", 00:15:07.947 "thread": "nvmf_tgt_poll_group_000", 00:15:07.947 "listen_address": { 00:15:07.947 "trtype": "TCP", 00:15:07.947 "adrfam": "IPv4", 00:15:07.947 "traddr": "10.0.0.2", 00:15:07.947 "trsvcid": "4420" 00:15:07.947 }, 00:15:07.947 "peer_address": { 00:15:07.947 "trtype": "TCP", 00:15:07.947 "adrfam": "IPv4", 00:15:07.947 "traddr": "10.0.0.1", 00:15:07.947 "trsvcid": "37704" 00:15:07.947 }, 00:15:07.947 "auth": { 00:15:07.947 "state": "completed", 00:15:07.947 "digest": "sha384", 00:15:07.947 "dhgroup": "ffdhe2048" 00:15:07.947 } 00:15:07.947 } 00:15:07.947 ]' 00:15:07.947 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.205 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.205 16:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.205 16:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.205 16:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.205 16:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.205 16:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.205 16:07:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.462 16:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:09.396 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.655 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.913 00:15:09.913 16:07:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.913 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.913 16:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.170 { 00:15:10.170 "cntlid": 63, 00:15:10.170 "qid": 0, 00:15:10.170 "state": "enabled", 00:15:10.170 "thread": "nvmf_tgt_poll_group_000", 00:15:10.170 "listen_address": { 00:15:10.170 "trtype": "TCP", 00:15:10.170 "adrfam": "IPv4", 00:15:10.170 "traddr": "10.0.0.2", 00:15:10.170 "trsvcid": "4420" 00:15:10.170 }, 00:15:10.170 "peer_address": { 00:15:10.170 "trtype": "TCP", 00:15:10.170 "adrfam": "IPv4", 00:15:10.170 "traddr": "10.0.0.1", 00:15:10.170 "trsvcid": "38390" 00:15:10.170 }, 00:15:10.170 "auth": { 00:15:10.170 "state": "completed", 00:15:10.170 "digest": "sha384", 00:15:10.170 "dhgroup": "ffdhe2048" 00:15:10.170 } 00:15:10.170 } 00:15:10.170 ]' 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.170 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.426 16:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
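The xtrace above keeps repeating the same DH-HMAC-CHAP round for every dhgroup/key combination (sha384 digest throughout this part of the log). A condensed sketch of one such round, reconstructed from this trace, follows; the target_rpc/host_rpc helpers and the dhgroup/keyid variables are illustrative shorthand (the trace does not show which socket rpc_cmd talks to, so the default target socket is an assumption), while the RPC names, the /var/tmp/host.sock host socket, the 10.0.0.2:4420 listener, and the NQNs are taken verbatim from the log.

    # Sketch of one auth-test round as traced above (not the verbatim target/auth.sh body).
    # Assumptions: $spdk points at the SPDK checkout used in this job, the target app uses its
    # default RPC socket, and $key/$ckey hold the DHHC-1 secrets shown in the trace.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    target_rpc() { "$spdk/scripts/rpc.py" "$@"; }                        # "rpc_cmd" in the trace (socket assumed default)
    host_rpc()   { "$spdk/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # "hostrpc" in the trace
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    dhgroup=ffdhe3072 keyid=1                                            # one example combination from the log

    # Limit the SPDK initiator to the digest/dhgroup under test.
    host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

    # Allow the host on the subsystem with the DH-HMAC-CHAP key (plus a controller key when one is defined).
    target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller through the SPDK initiator, then confirm the negotiated auth parameters on the target.
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    target_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # trace checks .digest, .dhgroup, .state == completed
    host_rpc bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, passing the secrets directly to nvme-cli,
    # then drop the host entry again before the next dhgroup/key combination.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"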
00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.356 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.612 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.869 00:15:11.869 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.869 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.869 16:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.127 { 
00:15:12.127 "cntlid": 65, 00:15:12.127 "qid": 0, 00:15:12.127 "state": "enabled", 00:15:12.127 "thread": "nvmf_tgt_poll_group_000", 00:15:12.127 "listen_address": { 00:15:12.127 "trtype": "TCP", 00:15:12.127 "adrfam": "IPv4", 00:15:12.127 "traddr": "10.0.0.2", 00:15:12.127 "trsvcid": "4420" 00:15:12.127 }, 00:15:12.127 "peer_address": { 00:15:12.127 "trtype": "TCP", 00:15:12.127 "adrfam": "IPv4", 00:15:12.127 "traddr": "10.0.0.1", 00:15:12.127 "trsvcid": "38414" 00:15:12.127 }, 00:15:12.127 "auth": { 00:15:12.127 "state": "completed", 00:15:12.127 "digest": "sha384", 00:15:12.127 "dhgroup": "ffdhe3072" 00:15:12.127 } 00:15:12.127 } 00:15:12.127 ]' 00:15:12.127 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.385 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.385 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.385 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.385 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.386 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.386 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.386 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.644 16:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.575 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.832 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.089 00:15:14.089 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.089 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.089 16:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.346 { 00:15:14.346 "cntlid": 67, 00:15:14.346 "qid": 0, 00:15:14.346 "state": "enabled", 00:15:14.346 "thread": "nvmf_tgt_poll_group_000", 00:15:14.346 "listen_address": { 00:15:14.346 "trtype": "TCP", 00:15:14.346 "adrfam": "IPv4", 00:15:14.346 "traddr": "10.0.0.2", 00:15:14.346 "trsvcid": "4420" 00:15:14.346 }, 00:15:14.346 "peer_address": { 00:15:14.346 "trtype": "TCP", 00:15:14.346 "adrfam": "IPv4", 00:15:14.346 "traddr": "10.0.0.1", 00:15:14.346 "trsvcid": "38438" 00:15:14.346 }, 00:15:14.346 "auth": { 00:15:14.346 "state": "completed", 00:15:14.346 "digest": "sha384", 00:15:14.346 "dhgroup": "ffdhe3072" 00:15:14.346 } 00:15:14.346 } 00:15:14.346 ]' 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.346 16:08:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.346 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.603 16:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.538 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.795 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:15.795 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.796 16:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.360 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.360 { 00:15:16.360 "cntlid": 69, 00:15:16.360 "qid": 0, 00:15:16.360 "state": "enabled", 00:15:16.360 "thread": "nvmf_tgt_poll_group_000", 00:15:16.360 "listen_address": { 00:15:16.360 "trtype": "TCP", 00:15:16.360 "adrfam": "IPv4", 00:15:16.360 "traddr": "10.0.0.2", 00:15:16.360 "trsvcid": "4420" 00:15:16.360 }, 00:15:16.360 "peer_address": { 00:15:16.360 "trtype": "TCP", 00:15:16.360 "adrfam": "IPv4", 00:15:16.360 "traddr": "10.0.0.1", 00:15:16.360 "trsvcid": "38462" 00:15:16.360 }, 00:15:16.360 "auth": { 00:15:16.360 "state": "completed", 00:15:16.360 "digest": "sha384", 00:15:16.360 "dhgroup": "ffdhe3072" 00:15:16.360 } 00:15:16.360 } 00:15:16.360 ]' 00:15:16.360 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.617 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.874 16:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret 
DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.805 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.063 16:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.321 00:15:18.321 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.321 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.321 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.579 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.579 { 00:15:18.580 "cntlid": 71, 00:15:18.580 "qid": 0, 00:15:18.580 "state": "enabled", 00:15:18.580 "thread": "nvmf_tgt_poll_group_000", 00:15:18.580 "listen_address": { 00:15:18.580 "trtype": "TCP", 00:15:18.580 "adrfam": "IPv4", 00:15:18.580 "traddr": "10.0.0.2", 00:15:18.580 "trsvcid": "4420" 00:15:18.580 }, 00:15:18.580 "peer_address": { 00:15:18.580 "trtype": "TCP", 00:15:18.580 "adrfam": "IPv4", 00:15:18.580 "traddr": "10.0.0.1", 00:15:18.580 "trsvcid": "38488" 00:15:18.580 }, 00:15:18.580 "auth": { 00:15:18.580 "state": "completed", 00:15:18.580 "digest": "sha384", 00:15:18.580 "dhgroup": "ffdhe3072" 00:15:18.580 } 00:15:18.580 } 00:15:18.580 ]' 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.580 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.837 16:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.773 16:08:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.032 16:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.291 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.549 16:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.807 { 00:15:20.807 "cntlid": 73, 00:15:20.807 "qid": 0, 00:15:20.807 "state": "enabled", 00:15:20.807 "thread": "nvmf_tgt_poll_group_000", 00:15:20.807 "listen_address": { 00:15:20.807 "trtype": "TCP", 00:15:20.807 "adrfam": "IPv4", 00:15:20.807 "traddr": "10.0.0.2", 00:15:20.807 "trsvcid": "4420" 00:15:20.807 }, 00:15:20.807 "peer_address": { 00:15:20.807 "trtype": "TCP", 00:15:20.807 "adrfam": "IPv4", 00:15:20.807 "traddr": "10.0.0.1", 00:15:20.807 "trsvcid": "39720" 00:15:20.807 }, 00:15:20.807 "auth": { 00:15:20.807 
"state": "completed", 00:15:20.807 "digest": "sha384", 00:15:20.807 "dhgroup": "ffdhe4096" 00:15:20.807 } 00:15:20.807 } 00:15:20.807 ]' 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.807 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.064 16:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.030 16:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.289 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.546 00:15:22.546 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.546 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.546 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.804 { 00:15:22.804 "cntlid": 75, 00:15:22.804 "qid": 0, 00:15:22.804 "state": "enabled", 00:15:22.804 "thread": "nvmf_tgt_poll_group_000", 00:15:22.804 "listen_address": { 00:15:22.804 "trtype": "TCP", 00:15:22.804 "adrfam": "IPv4", 00:15:22.804 "traddr": "10.0.0.2", 00:15:22.804 "trsvcid": "4420" 00:15:22.804 }, 00:15:22.804 "peer_address": { 00:15:22.804 "trtype": "TCP", 00:15:22.804 "adrfam": "IPv4", 00:15:22.804 "traddr": "10.0.0.1", 00:15:22.804 "trsvcid": "39740" 00:15:22.804 }, 00:15:22.804 "auth": { 00:15:22.804 "state": "completed", 00:15:22.804 "digest": "sha384", 00:15:22.804 "dhgroup": "ffdhe4096" 00:15:22.804 } 00:15:22.804 } 00:15:22.804 ]' 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.804 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.062 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.062 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.062 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.062 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.062 16:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.319 16:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.252 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.253 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.253 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.510 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:24.768 00:15:24.768 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.768 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.768 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.026 { 00:15:25.026 "cntlid": 77, 00:15:25.026 "qid": 0, 00:15:25.026 "state": "enabled", 00:15:25.026 "thread": "nvmf_tgt_poll_group_000", 00:15:25.026 "listen_address": { 00:15:25.026 "trtype": "TCP", 00:15:25.026 "adrfam": "IPv4", 00:15:25.026 "traddr": "10.0.0.2", 00:15:25.026 "trsvcid": "4420" 00:15:25.026 }, 00:15:25.026 "peer_address": { 00:15:25.026 "trtype": "TCP", 00:15:25.026 "adrfam": "IPv4", 00:15:25.026 "traddr": "10.0.0.1", 00:15:25.026 "trsvcid": "39778" 00:15:25.026 }, 00:15:25.026 "auth": { 00:15:25.026 "state": "completed", 00:15:25.026 "digest": "sha384", 00:15:25.026 "dhgroup": "ffdhe4096" 00:15:25.026 } 00:15:25.026 } 00:15:25.026 ]' 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.026 16:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.026 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.285 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.285 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.285 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.285 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.543 16:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:26.479 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.479 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.480 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.045 00:15:27.045 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.045 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.045 16:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.303 { 00:15:27.303 "cntlid": 79, 00:15:27.303 "qid": 
0, 00:15:27.303 "state": "enabled", 00:15:27.303 "thread": "nvmf_tgt_poll_group_000", 00:15:27.303 "listen_address": { 00:15:27.303 "trtype": "TCP", 00:15:27.303 "adrfam": "IPv4", 00:15:27.303 "traddr": "10.0.0.2", 00:15:27.303 "trsvcid": "4420" 00:15:27.303 }, 00:15:27.303 "peer_address": { 00:15:27.303 "trtype": "TCP", 00:15:27.303 "adrfam": "IPv4", 00:15:27.303 "traddr": "10.0.0.1", 00:15:27.303 "trsvcid": "39806" 00:15:27.303 }, 00:15:27.303 "auth": { 00:15:27.303 "state": "completed", 00:15:27.303 "digest": "sha384", 00:15:27.303 "dhgroup": "ffdhe4096" 00:15:27.303 } 00:15:27.303 } 00:15:27.303 ]' 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.303 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.560 16:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.493 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.751 16:08:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.751 16:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.320 00:15:29.320 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.320 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.320 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.578 { 00:15:29.578 "cntlid": 81, 00:15:29.578 "qid": 0, 00:15:29.578 "state": "enabled", 00:15:29.578 "thread": "nvmf_tgt_poll_group_000", 00:15:29.578 "listen_address": { 00:15:29.578 "trtype": "TCP", 00:15:29.578 "adrfam": "IPv4", 00:15:29.578 "traddr": "10.0.0.2", 00:15:29.578 "trsvcid": "4420" 00:15:29.578 }, 00:15:29.578 "peer_address": { 00:15:29.578 "trtype": "TCP", 00:15:29.578 "adrfam": "IPv4", 00:15:29.578 "traddr": "10.0.0.1", 00:15:29.578 "trsvcid": "39844" 00:15:29.578 }, 00:15:29.578 "auth": { 00:15:29.578 "state": "completed", 00:15:29.578 "digest": "sha384", 00:15:29.578 "dhgroup": "ffdhe6144" 00:15:29.578 } 00:15:29.578 } 00:15:29.578 ]' 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.578 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.836 16:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.771 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:31.027 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.028 16:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.590 00:15:31.590 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.590 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.590 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.846 { 00:15:31.846 "cntlid": 83, 00:15:31.846 "qid": 0, 00:15:31.846 "state": "enabled", 00:15:31.846 "thread": "nvmf_tgt_poll_group_000", 00:15:31.846 "listen_address": { 00:15:31.846 "trtype": "TCP", 00:15:31.846 "adrfam": "IPv4", 00:15:31.846 "traddr": "10.0.0.2", 00:15:31.846 "trsvcid": "4420" 00:15:31.846 }, 00:15:31.846 "peer_address": { 00:15:31.846 "trtype": "TCP", 00:15:31.846 "adrfam": "IPv4", 00:15:31.846 "traddr": "10.0.0.1", 00:15:31.846 "trsvcid": "56856" 00:15:31.846 }, 00:15:31.846 "auth": { 00:15:31.846 "state": "completed", 00:15:31.846 "digest": "sha384", 00:15:31.846 "dhgroup": "ffdhe6144" 00:15:31.846 } 00:15:31.846 } 00:15:31.846 ]' 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.846 16:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.102 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret 
DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.036 16:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.293 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.294 16:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.294 16:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 16:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.294 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.294 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.859 00:15:33.859 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.859 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.859 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.117 16:08:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.117 16:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.117 16:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.117 16:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.117 { 00:15:34.117 "cntlid": 85, 00:15:34.117 "qid": 0, 00:15:34.117 "state": "enabled", 00:15:34.117 "thread": "nvmf_tgt_poll_group_000", 00:15:34.117 "listen_address": { 00:15:34.117 "trtype": "TCP", 00:15:34.117 "adrfam": "IPv4", 00:15:34.117 "traddr": "10.0.0.2", 00:15:34.117 "trsvcid": "4420" 00:15:34.117 }, 00:15:34.117 "peer_address": { 00:15:34.117 "trtype": "TCP", 00:15:34.117 "adrfam": "IPv4", 00:15:34.117 "traddr": "10.0.0.1", 00:15:34.117 "trsvcid": "56874" 00:15:34.117 }, 00:15:34.117 "auth": { 00:15:34.117 "state": "completed", 00:15:34.117 "digest": "sha384", 00:15:34.117 "dhgroup": "ffdhe6144" 00:15:34.117 } 00:15:34.117 } 00:15:34.117 ]' 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.117 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.375 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.375 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.375 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.649 16:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
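Each hostrpc record here expands into a scripts/rpc.py call against the host app's /var/tmp/host.sock socket, as the next line of the trace shows. Taken together, one (digest, dhgroup, key) pass of this loop reduces to the sketch below. It is a condensed, hypothetical reconstruction rather than a verbatim excerpt: the variables are placeholders for the full Jenkins workspace paths, the uuid-based host NQN and the key names used above, and the optional controller key mirrors the ${ckeys[$3]:+...} expansion visible in the trace.

  # Placeholders (hypothetical names; the trace uses full workspace paths and real key names)
  HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-app RPC, what 'hostrpc' wraps above
  TGT_RPC="scripts/rpc.py"                          # target-side RPC, what 'rpc_cmd' wraps above
  HOST_NQN="nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a"
  KEY=key1 CKEY=ckey1                               # keys registered with the target; CKEY may be empty

  # arm the host with exactly one digest/dhgroup, register the host on the subsystem,
  # attach a controller with DH-CHAP, then check what the qpair actually negotiated
  $HOST_RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  $TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
      --dhchap-key "$KEY" ${CKEY:+--dhchap-ctrlr-key "$CKEY"}
  $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "$KEY" ${CKEY:+--dhchap-ctrlr-key "$CKEY"}
  $TGT_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha384, ffdhe6144, completed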
00:15:35.588 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.846 16:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.413 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.413 { 00:15:36.413 "cntlid": 87, 00:15:36.413 "qid": 0, 00:15:36.413 "state": "enabled", 00:15:36.413 "thread": "nvmf_tgt_poll_group_000", 00:15:36.413 "listen_address": { 00:15:36.413 "trtype": "TCP", 00:15:36.413 "adrfam": "IPv4", 00:15:36.413 "traddr": "10.0.0.2", 00:15:36.413 "trsvcid": "4420" 00:15:36.413 }, 00:15:36.413 "peer_address": { 00:15:36.413 "trtype": "TCP", 00:15:36.413 "adrfam": "IPv4", 00:15:36.413 "traddr": "10.0.0.1", 00:15:36.413 "trsvcid": "56902" 00:15:36.413 }, 00:15:36.413 "auth": { 00:15:36.413 "state": "completed", 
00:15:36.413 "digest": "sha384", 00:15:36.413 "dhgroup": "ffdhe6144" 00:15:36.413 } 00:15:36.413 } 00:15:36.413 ]' 00:15:36.413 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.671 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.928 16:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.863 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.122 16:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.090 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.090 16:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.090 { 00:15:39.090 "cntlid": 89, 00:15:39.090 "qid": 0, 00:15:39.090 "state": "enabled", 00:15:39.090 "thread": "nvmf_tgt_poll_group_000", 00:15:39.090 "listen_address": { 00:15:39.090 "trtype": "TCP", 00:15:39.090 "adrfam": "IPv4", 00:15:39.090 "traddr": "10.0.0.2", 00:15:39.090 "trsvcid": "4420" 00:15:39.090 }, 00:15:39.090 "peer_address": { 00:15:39.090 "trtype": "TCP", 00:15:39.090 "adrfam": "IPv4", 00:15:39.090 "traddr": "10.0.0.1", 00:15:39.090 "trsvcid": "56932" 00:15:39.090 }, 00:15:39.090 "auth": { 00:15:39.090 "state": "completed", 00:15:39.090 "digest": "sha384", 00:15:39.090 "dhgroup": "ffdhe8192" 00:15:39.090 } 00:15:39.090 } 00:15:39.090 ]' 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.090 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.348 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.348 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.348 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.606 16:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.541 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.799 16:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
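After each in-app attach like the one just above, the pass checks the qpair's auth block, detaches the bdev controller, re-runs the same DH-CHAP exchange through the kernel initiator with the literal DHHC-1 secrets, and finally removes the host entry so the next key starts clean. A minimal sketch of that tail end, reusing the placeholders from the earlier sketch plus HOST_ID, DHCHAP_KEY and DHCHAP_CTRL_KEY standing in for the hostid and secrets printed in this log:

  $HOST_RPC bdev_nvme_detach_controller nvme0
  # same subsystem, same secrets, but authenticated by the kernel NVMe/TCP initiator this time
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOST_NQN" --hostid "$HOST_ID" \
      --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # drop the host entry before the next (digest, dhgroup, key) combination
  $TGT_RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"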
00:15:41.737 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.737 { 00:15:41.737 "cntlid": 91, 00:15:41.737 "qid": 0, 00:15:41.737 "state": "enabled", 00:15:41.737 "thread": "nvmf_tgt_poll_group_000", 00:15:41.737 "listen_address": { 00:15:41.737 "trtype": "TCP", 00:15:41.737 "adrfam": "IPv4", 00:15:41.737 "traddr": "10.0.0.2", 00:15:41.737 "trsvcid": "4420" 00:15:41.737 }, 00:15:41.737 "peer_address": { 00:15:41.737 "trtype": "TCP", 00:15:41.737 "adrfam": "IPv4", 00:15:41.737 "traddr": "10.0.0.1", 00:15:41.737 "trsvcid": "58850" 00:15:41.737 }, 00:15:41.737 "auth": { 00:15:41.737 "state": "completed", 00:15:41.737 "digest": "sha384", 00:15:41.737 "dhgroup": "ffdhe8192" 00:15:41.737 } 00:15:41.737 } 00:15:41.737 ]' 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.737 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.995 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.995 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.995 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.995 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.995 16:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.252 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.190 16:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.448 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.449 16:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.385 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.385 { 
00:15:44.385 "cntlid": 93, 00:15:44.385 "qid": 0, 00:15:44.385 "state": "enabled", 00:15:44.385 "thread": "nvmf_tgt_poll_group_000", 00:15:44.385 "listen_address": { 00:15:44.385 "trtype": "TCP", 00:15:44.385 "adrfam": "IPv4", 00:15:44.385 "traddr": "10.0.0.2", 00:15:44.385 "trsvcid": "4420" 00:15:44.385 }, 00:15:44.385 "peer_address": { 00:15:44.385 "trtype": "TCP", 00:15:44.385 "adrfam": "IPv4", 00:15:44.385 "traddr": "10.0.0.1", 00:15:44.385 "trsvcid": "58870" 00:15:44.385 }, 00:15:44.385 "auth": { 00:15:44.385 "state": "completed", 00:15:44.385 "digest": "sha384", 00:15:44.385 "dhgroup": "ffdhe8192" 00:15:44.385 } 00:15:44.385 } 00:15:44.385 ]' 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.385 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.643 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.643 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.643 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.643 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.643 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.901 16:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.837 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.095 16:08:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.095 16:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.035 00:15:47.035 16:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.035 16:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.035 16:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.035 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.035 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.035 16:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.035 16:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.307 { 00:15:47.307 "cntlid": 95, 00:15:47.307 "qid": 0, 00:15:47.307 "state": "enabled", 00:15:47.307 "thread": "nvmf_tgt_poll_group_000", 00:15:47.307 "listen_address": { 00:15:47.307 "trtype": "TCP", 00:15:47.307 "adrfam": "IPv4", 00:15:47.307 "traddr": "10.0.0.2", 00:15:47.307 "trsvcid": "4420" 00:15:47.307 }, 00:15:47.307 "peer_address": { 00:15:47.307 "trtype": "TCP", 00:15:47.307 "adrfam": "IPv4", 00:15:47.307 "traddr": "10.0.0.1", 00:15:47.307 "trsvcid": "58890" 00:15:47.307 }, 00:15:47.307 "auth": { 00:15:47.307 "state": "completed", 00:15:47.307 "digest": "sha384", 00:15:47.307 "dhgroup": "ffdhe8192" 00:15:47.307 } 00:15:47.307 } 00:15:47.307 ]' 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.307 16:08:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.307 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.572 16:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.508 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.764 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.022 00:15:49.022 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.022 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.022 16:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.279 { 00:15:49.279 "cntlid": 97, 00:15:49.279 "qid": 0, 00:15:49.279 "state": "enabled", 00:15:49.279 "thread": "nvmf_tgt_poll_group_000", 00:15:49.279 "listen_address": { 00:15:49.279 "trtype": "TCP", 00:15:49.279 "adrfam": "IPv4", 00:15:49.279 "traddr": "10.0.0.2", 00:15:49.279 "trsvcid": "4420" 00:15:49.279 }, 00:15:49.279 "peer_address": { 00:15:49.279 "trtype": "TCP", 00:15:49.279 "adrfam": "IPv4", 00:15:49.279 "traddr": "10.0.0.1", 00:15:49.279 "trsvcid": "58912" 00:15:49.279 }, 00:15:49.279 "auth": { 00:15:49.279 "state": "completed", 00:15:49.279 "digest": "sha512", 00:15:49.279 "dhgroup": "null" 00:15:49.279 } 00:15:49.279 } 00:15:49.279 ]' 00:15:49.279 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.537 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.795 16:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret 
DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.733 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.990 16:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.247 00:15:51.247 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.247 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.247 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.504 { 00:15:51.504 "cntlid": 99, 00:15:51.504 "qid": 0, 00:15:51.504 "state": "enabled", 00:15:51.504 "thread": "nvmf_tgt_poll_group_000", 00:15:51.504 "listen_address": { 00:15:51.504 "trtype": "TCP", 00:15:51.504 "adrfam": "IPv4", 00:15:51.504 "traddr": "10.0.0.2", 00:15:51.504 "trsvcid": "4420" 00:15:51.504 }, 00:15:51.504 "peer_address": { 00:15:51.504 "trtype": "TCP", 00:15:51.504 "adrfam": "IPv4", 00:15:51.504 "traddr": "10.0.0.1", 00:15:51.504 "trsvcid": "50044" 00:15:51.504 }, 00:15:51.504 "auth": { 00:15:51.504 "state": "completed", 00:15:51.504 "digest": "sha512", 00:15:51.504 "dhgroup": "null" 00:15:51.504 } 00:15:51.504 } 00:15:51.504 ]' 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.504 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.762 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:51.762 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.762 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.762 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.762 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.019 16:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.951 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.951 16:08:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.210 16:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.467 00:15:53.467 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.467 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.467 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.724 { 00:15:53.724 "cntlid": 101, 00:15:53.724 "qid": 0, 00:15:53.724 "state": "enabled", 00:15:53.724 "thread": "nvmf_tgt_poll_group_000", 00:15:53.724 "listen_address": { 00:15:53.724 "trtype": "TCP", 00:15:53.724 "adrfam": "IPv4", 00:15:53.724 "traddr": "10.0.0.2", 00:15:53.724 "trsvcid": "4420" 00:15:53.724 }, 00:15:53.724 "peer_address": { 00:15:53.724 "trtype": "TCP", 00:15:53.724 "adrfam": "IPv4", 00:15:53.724 "traddr": "10.0.0.1", 00:15:53.724 "trsvcid": "50072" 00:15:53.724 }, 00:15:53.724 "auth": 
{ 00:15:53.724 "state": "completed", 00:15:53.724 "digest": "sha512", 00:15:53.724 "dhgroup": "null" 00:15:53.724 } 00:15:53.724 } 00:15:53.724 ]' 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.724 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.982 16:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.948 16:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.206 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.464 00:15:55.464 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.464 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.464 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.721 { 00:15:55.721 "cntlid": 103, 00:15:55.721 "qid": 0, 00:15:55.721 "state": "enabled", 00:15:55.721 "thread": "nvmf_tgt_poll_group_000", 00:15:55.721 "listen_address": { 00:15:55.721 "trtype": "TCP", 00:15:55.721 "adrfam": "IPv4", 00:15:55.721 "traddr": "10.0.0.2", 00:15:55.721 "trsvcid": "4420" 00:15:55.721 }, 00:15:55.721 "peer_address": { 00:15:55.721 "trtype": "TCP", 00:15:55.721 "adrfam": "IPv4", 00:15:55.721 "traddr": "10.0.0.1", 00:15:55.721 "trsvcid": "50118" 00:15:55.721 }, 00:15:55.721 "auth": { 00:15:55.721 "state": "completed", 00:15:55.721 "digest": "sha512", 00:15:55.721 "dhgroup": "null" 00:15:55.721 } 00:15:55.721 } 00:15:55.721 ]' 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.721 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.978 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.978 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.978 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.978 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.978 16:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.235 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.170 16:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.427 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.684 00:15:57.684 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.684 16:08:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.684 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.941 { 00:15:57.941 "cntlid": 105, 00:15:57.941 "qid": 0, 00:15:57.941 "state": "enabled", 00:15:57.941 "thread": "nvmf_tgt_poll_group_000", 00:15:57.941 "listen_address": { 00:15:57.941 "trtype": "TCP", 00:15:57.941 "adrfam": "IPv4", 00:15:57.941 "traddr": "10.0.0.2", 00:15:57.941 "trsvcid": "4420" 00:15:57.941 }, 00:15:57.941 "peer_address": { 00:15:57.941 "trtype": "TCP", 00:15:57.941 "adrfam": "IPv4", 00:15:57.941 "traddr": "10.0.0.1", 00:15:57.941 "trsvcid": "50148" 00:15:57.941 }, 00:15:57.941 "auth": { 00:15:57.941 "state": "completed", 00:15:57.941 "digest": "sha512", 00:15:57.941 "dhgroup": "ffdhe2048" 00:15:57.941 } 00:15:57.941 } 00:15:57.941 ]' 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.941 16:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.199 16:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.131 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.388 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.953 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.953 { 00:15:59.953 "cntlid": 107, 00:15:59.953 "qid": 0, 00:15:59.953 "state": "enabled", 00:15:59.953 "thread": 
"nvmf_tgt_poll_group_000", 00:15:59.953 "listen_address": { 00:15:59.953 "trtype": "TCP", 00:15:59.953 "adrfam": "IPv4", 00:15:59.953 "traddr": "10.0.0.2", 00:15:59.953 "trsvcid": "4420" 00:15:59.953 }, 00:15:59.953 "peer_address": { 00:15:59.953 "trtype": "TCP", 00:15:59.953 "adrfam": "IPv4", 00:15:59.953 "traddr": "10.0.0.1", 00:15:59.953 "trsvcid": "57842" 00:15:59.953 }, 00:15:59.953 "auth": { 00:15:59.953 "state": "completed", 00:15:59.953 "digest": "sha512", 00:15:59.953 "dhgroup": "ffdhe2048" 00:15:59.953 } 00:15:59.953 } 00:15:59.953 ]' 00:15:59.953 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.211 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.211 16:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.211 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.211 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.211 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.211 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.211 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.469 16:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.402 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.660 16:08:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.660 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.918 00:16:01.918 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.918 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.918 16:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.175 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.176 { 00:16:02.176 "cntlid": 109, 00:16:02.176 "qid": 0, 00:16:02.176 "state": "enabled", 00:16:02.176 "thread": "nvmf_tgt_poll_group_000", 00:16:02.176 "listen_address": { 00:16:02.176 "trtype": "TCP", 00:16:02.176 "adrfam": "IPv4", 00:16:02.176 "traddr": "10.0.0.2", 00:16:02.176 "trsvcid": "4420" 00:16:02.176 }, 00:16:02.176 "peer_address": { 00:16:02.176 "trtype": "TCP", 00:16:02.176 "adrfam": "IPv4", 00:16:02.176 "traddr": "10.0.0.1", 00:16:02.176 "trsvcid": "57864" 00:16:02.176 }, 00:16:02.176 "auth": { 00:16:02.176 "state": "completed", 00:16:02.176 "digest": "sha512", 00:16:02.176 "dhgroup": "ffdhe2048" 00:16:02.176 } 00:16:02.176 } 00:16:02.176 ]' 00:16:02.176 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.434 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.692 16:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.631 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.890 16:08:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.148 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.408 16:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.666 { 00:16:04.666 "cntlid": 111, 00:16:04.666 "qid": 0, 00:16:04.666 "state": "enabled", 00:16:04.666 "thread": "nvmf_tgt_poll_group_000", 00:16:04.666 "listen_address": { 00:16:04.666 "trtype": "TCP", 00:16:04.666 "adrfam": "IPv4", 00:16:04.666 "traddr": "10.0.0.2", 00:16:04.666 "trsvcid": "4420" 00:16:04.666 }, 00:16:04.666 "peer_address": { 00:16:04.666 "trtype": "TCP", 00:16:04.666 "adrfam": "IPv4", 00:16:04.666 "traddr": "10.0.0.1", 00:16:04.666 "trsvcid": "57890" 00:16:04.666 }, 00:16:04.666 "auth": { 00:16:04.666 "state": "completed", 00:16:04.666 "digest": "sha512", 00:16:04.666 "dhgroup": "ffdhe2048" 00:16:04.666 } 00:16:04.666 } 00:16:04.666 ]' 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.666 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.924 16:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.859 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.117 16:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.375 00:16:06.375 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.375 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.375 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.633 { 00:16:06.633 "cntlid": 113, 00:16:06.633 "qid": 0, 00:16:06.633 "state": "enabled", 00:16:06.633 "thread": "nvmf_tgt_poll_group_000", 00:16:06.633 "listen_address": { 00:16:06.633 "trtype": "TCP", 00:16:06.633 "adrfam": "IPv4", 00:16:06.633 "traddr": "10.0.0.2", 00:16:06.633 "trsvcid": "4420" 00:16:06.633 }, 00:16:06.633 "peer_address": { 00:16:06.633 "trtype": "TCP", 00:16:06.633 "adrfam": "IPv4", 00:16:06.633 "traddr": "10.0.0.1", 00:16:06.633 "trsvcid": "57918" 00:16:06.633 }, 00:16:06.633 "auth": { 00:16:06.633 "state": "completed", 00:16:06.633 "digest": "sha512", 00:16:06.633 "dhgroup": "ffdhe3072" 00:16:06.633 } 00:16:06.633 } 00:16:06.633 ]' 00:16:06.633 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.892 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.150 16:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.087 16:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.347 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.605 00:16:08.605 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.605 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.605 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.873 { 00:16:08.873 "cntlid": 115, 00:16:08.873 "qid": 0, 00:16:08.873 "state": "enabled", 00:16:08.873 "thread": "nvmf_tgt_poll_group_000", 00:16:08.873 "listen_address": { 00:16:08.873 "trtype": "TCP", 00:16:08.873 "adrfam": "IPv4", 00:16:08.873 "traddr": "10.0.0.2", 00:16:08.873 "trsvcid": "4420" 00:16:08.873 }, 00:16:08.873 "peer_address": { 00:16:08.873 "trtype": "TCP", 00:16:08.873 "adrfam": "IPv4", 00:16:08.873 "traddr": "10.0.0.1", 00:16:08.873 "trsvcid": "57950" 00:16:08.873 }, 00:16:08.873 "auth": { 00:16:08.873 "state": "completed", 00:16:08.873 "digest": "sha512", 00:16:08.873 "dhgroup": "ffdhe3072" 00:16:08.873 } 00:16:08.873 } 
00:16:08.873 ]' 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.873 16:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.132 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.068 16:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.325 16:08:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.325 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.892 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.892 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.892 { 00:16:10.892 "cntlid": 117, 00:16:10.892 "qid": 0, 00:16:10.892 "state": "enabled", 00:16:10.892 "thread": "nvmf_tgt_poll_group_000", 00:16:10.892 "listen_address": { 00:16:10.892 "trtype": "TCP", 00:16:10.893 "adrfam": "IPv4", 00:16:10.893 "traddr": "10.0.0.2", 00:16:10.893 "trsvcid": "4420" 00:16:10.893 }, 00:16:10.893 "peer_address": { 00:16:10.893 "trtype": "TCP", 00:16:10.893 "adrfam": "IPv4", 00:16:10.893 "traddr": "10.0.0.1", 00:16:10.893 "trsvcid": "51010" 00:16:10.893 }, 00:16:10.893 "auth": { 00:16:10.893 "state": "completed", 00:16:10.893 "digest": "sha512", 00:16:10.893 "dhgroup": "ffdhe3072" 00:16:10.893 } 00:16:10.893 } 00:16:10.893 ]' 00:16:10.893 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.186 16:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.444 16:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:16:12.376 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.376 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.376 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.376 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.376 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.377 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.377 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.377 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.634 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.894 00:16:12.894 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.894 16:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.894 16:08:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.153 { 00:16:13.153 "cntlid": 119, 00:16:13.153 "qid": 0, 00:16:13.153 "state": "enabled", 00:16:13.153 "thread": "nvmf_tgt_poll_group_000", 00:16:13.153 "listen_address": { 00:16:13.153 "trtype": "TCP", 00:16:13.153 "adrfam": "IPv4", 00:16:13.153 "traddr": "10.0.0.2", 00:16:13.153 "trsvcid": "4420" 00:16:13.153 }, 00:16:13.153 "peer_address": { 00:16:13.153 "trtype": "TCP", 00:16:13.153 "adrfam": "IPv4", 00:16:13.153 "traddr": "10.0.0.1", 00:16:13.153 "trsvcid": "51026" 00:16:13.153 }, 00:16:13.153 "auth": { 00:16:13.153 "state": "completed", 00:16:13.153 "digest": "sha512", 00:16:13.153 "dhgroup": "ffdhe3072" 00:16:13.153 } 00:16:13.153 } 00:16:13.153 ]' 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.153 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.413 16:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.346 16:09:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.346 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.604 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.170 00:16:15.170 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.170 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.170 16:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.428 { 00:16:15.428 "cntlid": 121, 00:16:15.428 "qid": 0, 00:16:15.428 "state": "enabled", 00:16:15.428 "thread": "nvmf_tgt_poll_group_000", 00:16:15.428 "listen_address": { 00:16:15.428 "trtype": "TCP", 00:16:15.428 "adrfam": "IPv4", 
00:16:15.428 "traddr": "10.0.0.2", 00:16:15.428 "trsvcid": "4420" 00:16:15.428 }, 00:16:15.428 "peer_address": { 00:16:15.428 "trtype": "TCP", 00:16:15.428 "adrfam": "IPv4", 00:16:15.428 "traddr": "10.0.0.1", 00:16:15.428 "trsvcid": "51060" 00:16:15.428 }, 00:16:15.428 "auth": { 00:16:15.428 "state": "completed", 00:16:15.428 "digest": "sha512", 00:16:15.428 "dhgroup": "ffdhe4096" 00:16:15.428 } 00:16:15.428 } 00:16:15.428 ]' 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.428 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.686 16:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.620 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.878 16:09:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.878 16:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.135 00:16:17.135 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.135 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.135 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.406 { 00:16:17.406 "cntlid": 123, 00:16:17.406 "qid": 0, 00:16:17.406 "state": "enabled", 00:16:17.406 "thread": "nvmf_tgt_poll_group_000", 00:16:17.406 "listen_address": { 00:16:17.406 "trtype": "TCP", 00:16:17.406 "adrfam": "IPv4", 00:16:17.406 "traddr": "10.0.0.2", 00:16:17.406 "trsvcid": "4420" 00:16:17.406 }, 00:16:17.406 "peer_address": { 00:16:17.406 "trtype": "TCP", 00:16:17.406 "adrfam": "IPv4", 00:16:17.406 "traddr": "10.0.0.1", 00:16:17.406 "trsvcid": "51096" 00:16:17.406 }, 00:16:17.406 "auth": { 00:16:17.406 "state": "completed", 00:16:17.406 "digest": "sha512", 00:16:17.406 "dhgroup": "ffdhe4096" 00:16:17.406 } 00:16:17.406 } 00:16:17.406 ]' 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.406 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.667 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.667 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.667 16:09:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.667 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.667 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.923 16:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.857 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.858 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.858 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.115 16:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.372 00:16:19.372 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.372 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.372 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.630 { 00:16:19.630 "cntlid": 125, 00:16:19.630 "qid": 0, 00:16:19.630 "state": "enabled", 00:16:19.630 "thread": "nvmf_tgt_poll_group_000", 00:16:19.630 "listen_address": { 00:16:19.630 "trtype": "TCP", 00:16:19.630 "adrfam": "IPv4", 00:16:19.630 "traddr": "10.0.0.2", 00:16:19.630 "trsvcid": "4420" 00:16:19.630 }, 00:16:19.630 "peer_address": { 00:16:19.630 "trtype": "TCP", 00:16:19.630 "adrfam": "IPv4", 00:16:19.630 "traddr": "10.0.0.1", 00:16:19.630 "trsvcid": "51124" 00:16:19.630 }, 00:16:19.630 "auth": { 00:16:19.630 "state": "completed", 00:16:19.630 "digest": "sha512", 00:16:19.630 "dhgroup": "ffdhe4096" 00:16:19.630 } 00:16:19.630 } 00:16:19.630 ]' 00:16:19.630 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.888 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.147 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:16:21.080 16:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
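For reference, each connect_authenticate pass in the trace above exercises the same DH-HMAC-CHAP round trip: the target admits the host NQN with a key pair (nvmf_subsystem_add_host), the SPDK initiator pins the digest/dhgroup and attaches (bdev_nvme_set_options plus bdev_nvme_attach_controller over /var/tmp/host.sock), the resulting qpair is checked for auth state "completed" with the expected digest and dhgroup, and the same handshake is then repeated with the kernel initiator via nvme connect before the host entry is removed again. The following is a minimal stand-alone sketch of one such iteration, assuming the SPDK target (default RPC socket) and host application (/var/tmp/host.sock) are already running, the subsystem nqn.2024-03.io.spdk:cnode0 and its 10.0.0.2:4420 TCP listener already exist, and keys named key1/ckey1 were registered on both sides earlier in the script; the DHHC-1 secrets shown are placeholders, not the values used in this run.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration (sha512 / ffdhe3072 / key1),
# condensed from the commands visible in the trace above. Assumes target and host
# apps are running, subsystem + listener exist, and key1/ckey1 are already loaded.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostid=29f67375-a902-e411-ace9-001e67bc3c9a

# Target side: allow this host on the subsystem with DH-HMAC-CHAP keys.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# SPDK host side: restrict the negotiable digest/dhgroup, then attach with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the qpair authenticated with the expected parameters, then detach.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # state / digest / dhgroup
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel host side: repeat the handshake with nvme-cli (placeholder secrets shown).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
  --dhchap-secret 'DHHC-1:01:<host secret>:' --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
nvme disconnect -n "$subnqn"

# Clean up so the next digest/dhgroup/key combination starts from a known state.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the actual test the same sequence is driven in a loop over every digest, dhgroup and key index, which is why the trace repeats this pattern with ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 and keys key0 through key3.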
00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.081 16:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.339 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.597 00:16:21.597 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.597 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.597 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.855 { 00:16:21.855 "cntlid": 127, 00:16:21.855 "qid": 0, 00:16:21.855 "state": "enabled", 00:16:21.855 "thread": "nvmf_tgt_poll_group_000", 00:16:21.855 "listen_address": { 00:16:21.855 "trtype": "TCP", 00:16:21.855 "adrfam": "IPv4", 00:16:21.855 "traddr": "10.0.0.2", 00:16:21.855 "trsvcid": "4420" 00:16:21.855 }, 00:16:21.855 "peer_address": { 00:16:21.855 "trtype": "TCP", 00:16:21.855 "adrfam": "IPv4", 00:16:21.855 "traddr": "10.0.0.1", 00:16:21.855 "trsvcid": "55978" 00:16:21.855 }, 00:16:21.855 "auth": { 00:16:21.855 "state": "completed", 00:16:21.855 "digest": "sha512", 00:16:21.855 "dhgroup": "ffdhe4096" 00:16:21.855 } 00:16:21.855 } 00:16:21.855 ]' 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.855 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.113 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.113 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.113 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.113 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.113 16:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.371 16:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.304 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.869 00:16:23.870 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.870 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.870 16:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.127 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.128 { 00:16:24.128 "cntlid": 129, 00:16:24.128 "qid": 0, 00:16:24.128 "state": "enabled", 00:16:24.128 "thread": "nvmf_tgt_poll_group_000", 00:16:24.128 "listen_address": { 00:16:24.128 "trtype": "TCP", 00:16:24.128 "adrfam": "IPv4", 00:16:24.128 "traddr": "10.0.0.2", 00:16:24.128 "trsvcid": "4420" 00:16:24.128 }, 00:16:24.128 "peer_address": { 00:16:24.128 "trtype": "TCP", 00:16:24.128 "adrfam": "IPv4", 00:16:24.128 "traddr": "10.0.0.1", 00:16:24.128 "trsvcid": "55998" 00:16:24.128 }, 00:16:24.128 "auth": { 00:16:24.128 "state": "completed", 00:16:24.128 "digest": "sha512", 00:16:24.128 "dhgroup": "ffdhe6144" 00:16:24.128 } 00:16:24.128 } 00:16:24.128 ]' 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.128 16:09:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.128 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.386 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.386 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.386 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.386 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.386 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.644 16:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.580 16:09:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.580 16:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.145 00:16:26.145 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.145 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.145 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.402 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.402 { 00:16:26.402 "cntlid": 131, 00:16:26.402 "qid": 0, 00:16:26.402 "state": "enabled", 00:16:26.402 "thread": "nvmf_tgt_poll_group_000", 00:16:26.402 "listen_address": { 00:16:26.402 "trtype": "TCP", 00:16:26.402 "adrfam": "IPv4", 00:16:26.402 "traddr": "10.0.0.2", 00:16:26.402 "trsvcid": "4420" 00:16:26.402 }, 00:16:26.402 "peer_address": { 00:16:26.402 "trtype": "TCP", 00:16:26.402 "adrfam": "IPv4", 00:16:26.402 "traddr": "10.0.0.1", 00:16:26.402 "trsvcid": "56024" 00:16:26.402 }, 00:16:26.402 "auth": { 00:16:26.402 "state": "completed", 00:16:26.402 "digest": "sha512", 00:16:26.402 "dhgroup": "ffdhe6144" 00:16:26.403 } 00:16:26.403 } 00:16:26.403 ]' 00:16:26.403 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.660 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.917 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.893 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.460 00:16:28.460 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.460 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.460 16:09:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.718 { 00:16:28.718 "cntlid": 133, 00:16:28.718 "qid": 0, 00:16:28.718 "state": "enabled", 00:16:28.718 "thread": "nvmf_tgt_poll_group_000", 00:16:28.718 "listen_address": { 00:16:28.718 "trtype": "TCP", 00:16:28.718 "adrfam": "IPv4", 00:16:28.718 "traddr": "10.0.0.2", 00:16:28.718 "trsvcid": "4420" 00:16:28.718 }, 00:16:28.718 "peer_address": { 00:16:28.718 "trtype": "TCP", 00:16:28.718 "adrfam": "IPv4", 00:16:28.718 "traddr": "10.0.0.1", 00:16:28.718 "trsvcid": "56050" 00:16:28.718 }, 00:16:28.718 "auth": { 00:16:28.718 "state": "completed", 00:16:28.718 "digest": "sha512", 00:16:28.718 "dhgroup": "ffdhe6144" 00:16:28.718 } 00:16:28.718 } 00:16:28.718 ]' 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.718 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.976 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.976 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.976 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.976 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.976 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.234 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.171 16:09:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.171 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.429 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.995 00:16:30.995 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.995 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.995 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.252 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.252 { 00:16:31.252 "cntlid": 135, 00:16:31.252 "qid": 0, 00:16:31.252 "state": "enabled", 00:16:31.252 "thread": "nvmf_tgt_poll_group_000", 00:16:31.253 "listen_address": { 00:16:31.253 "trtype": "TCP", 00:16:31.253 "adrfam": "IPv4", 00:16:31.253 "traddr": "10.0.0.2", 00:16:31.253 "trsvcid": "4420" 00:16:31.253 }, 
00:16:31.253 "peer_address": { 00:16:31.253 "trtype": "TCP", 00:16:31.253 "adrfam": "IPv4", 00:16:31.253 "traddr": "10.0.0.1", 00:16:31.253 "trsvcid": "51272" 00:16:31.253 }, 00:16:31.253 "auth": { 00:16:31.253 "state": "completed", 00:16:31.253 "digest": "sha512", 00:16:31.253 "dhgroup": "ffdhe6144" 00:16:31.253 } 00:16:31.253 } 00:16:31.253 ]' 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.253 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.512 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.446 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.702 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.634 00:16:33.634 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.634 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.634 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.891 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.891 { 00:16:33.891 "cntlid": 137, 00:16:33.891 "qid": 0, 00:16:33.892 "state": "enabled", 00:16:33.892 "thread": "nvmf_tgt_poll_group_000", 00:16:33.892 "listen_address": { 00:16:33.892 "trtype": "TCP", 00:16:33.892 "adrfam": "IPv4", 00:16:33.892 "traddr": "10.0.0.2", 00:16:33.892 "trsvcid": "4420" 00:16:33.892 }, 00:16:33.892 "peer_address": { 00:16:33.892 "trtype": "TCP", 00:16:33.892 "adrfam": "IPv4", 00:16:33.892 "traddr": "10.0.0.1", 00:16:33.892 "trsvcid": "51296" 00:16:33.892 }, 00:16:33.892 "auth": { 00:16:33.892 "state": "completed", 00:16:33.892 "digest": "sha512", 00:16:33.892 "dhgroup": "ffdhe8192" 00:16:33.892 } 00:16:33.892 } 00:16:33.892 ]' 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.892 16:09:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.892 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.149 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.081 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.339 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.272 00:16:36.272 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.272 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.272 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.530 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.530 { 00:16:36.530 "cntlid": 139, 00:16:36.530 "qid": 0, 00:16:36.530 "state": "enabled", 00:16:36.530 "thread": "nvmf_tgt_poll_group_000", 00:16:36.530 "listen_address": { 00:16:36.530 "trtype": "TCP", 00:16:36.530 "adrfam": "IPv4", 00:16:36.530 "traddr": "10.0.0.2", 00:16:36.530 "trsvcid": "4420" 00:16:36.530 }, 00:16:36.530 "peer_address": { 00:16:36.530 "trtype": "TCP", 00:16:36.530 "adrfam": "IPv4", 00:16:36.530 "traddr": "10.0.0.1", 00:16:36.530 "trsvcid": "51320" 00:16:36.530 }, 00:16:36.530 "auth": { 00:16:36.530 "state": "completed", 00:16:36.530 "digest": "sha512", 00:16:36.530 "dhgroup": "ffdhe8192" 00:16:36.531 } 00:16:36.531 } 00:16:36.531 ]' 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.531 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.788 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NGY2ZmRiZGY1ZGUzZTNmYTlhMmNkMzU1NDkyZGQ0MjBKnfaD: --dhchap-ctrl-secret DHHC-1:02:MzY5ZTBjYjk1MzA3OWVhMzVlZTI3YWQ5YjFhZjRjYjFjOTY1ZGIwY2I1OWMxZDI5fBcI+A==: 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.724 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.982 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.920 00:16:38.920 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.920 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.920 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.179 { 00:16:39.179 "cntlid": 141, 00:16:39.179 "qid": 0, 00:16:39.179 "state": "enabled", 00:16:39.179 "thread": "nvmf_tgt_poll_group_000", 00:16:39.179 "listen_address": { 00:16:39.179 "trtype": "TCP", 00:16:39.179 "adrfam": "IPv4", 00:16:39.179 "traddr": "10.0.0.2", 00:16:39.179 "trsvcid": "4420" 00:16:39.179 }, 00:16:39.179 "peer_address": { 00:16:39.179 "trtype": "TCP", 00:16:39.179 "adrfam": "IPv4", 00:16:39.179 "traddr": "10.0.0.1", 00:16:39.179 "trsvcid": "51346" 00:16:39.179 }, 00:16:39.179 "auth": { 00:16:39.179 "state": "completed", 00:16:39.179 "digest": "sha512", 00:16:39.179 "dhgroup": "ffdhe8192" 00:16:39.179 } 00:16:39.179 } 00:16:39.179 ]' 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.179 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.439 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:OTk5ZWFkMDQ3MjBlMDg0MTk0MDVlMDgyMTA5MjAwYzUyNzhmZWYyY2IxYmU3ZWY1uyk/QA==: --dhchap-ctrl-secret DHHC-1:01:MzMyNjkyOGQ1ZjI2MWJkYmVhM2I3MTExMmNhNWYzYjXfq1YO: 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.375 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.632 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.567 00:16:41.567 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.567 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.567 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.825 { 00:16:41.825 "cntlid": 143, 00:16:41.825 "qid": 0, 00:16:41.825 "state": "enabled", 00:16:41.825 "thread": "nvmf_tgt_poll_group_000", 00:16:41.825 "listen_address": { 00:16:41.825 "trtype": "TCP", 00:16:41.825 "adrfam": "IPv4", 00:16:41.825 "traddr": "10.0.0.2", 00:16:41.825 "trsvcid": "4420" 00:16:41.825 }, 00:16:41.825 "peer_address": { 00:16:41.825 "trtype": "TCP", 00:16:41.825 "adrfam": "IPv4", 00:16:41.825 "traddr": "10.0.0.1", 00:16:41.825 "trsvcid": "52986" 00:16:41.825 }, 00:16:41.825 "auth": { 00:16:41.825 "state": "completed", 00:16:41.825 "digest": "sha512", 00:16:41.825 "dhgroup": "ffdhe8192" 00:16:41.825 } 00:16:41.825 } 00:16:41.825 ]' 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.825 
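(For orientation while reading the trace: the connect_authenticate pass being exercised here reduces to roughly the RPC sequence below. The subsystem NQN, host NQN, target address, jq filters, and the /var/tmp/host.sock socket are taken from the log; the long rpc.py path is abbreviated, the target-side calls stand in for the test's rpc_cmd wrapper on its default socket, and "key3" names a DH-HMAC-CHAP key registered earlier in the test, outside this excerpt.)

  # Host side: restrict the allowed digest/dhgroup to the pair under test (sha512 / ffdhe8192)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Target side: allow the host NQN on the subsystem with the key being exercised
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
  # Host side: attach a controller with the same key, then check that the qpair authenticated
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"

The same key material is then exercised through the kernel initiator a few entries below via nvme connect ... --dhchap-secret DHHC-1:03:..., before the host is detached and removed again ahead of the next key.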
16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.825 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.083 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.019 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.277 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.278 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.251 00:16:44.251 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.251 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.251 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.508 { 00:16:44.508 "cntlid": 145, 00:16:44.508 "qid": 0, 00:16:44.508 "state": "enabled", 00:16:44.508 "thread": "nvmf_tgt_poll_group_000", 00:16:44.508 "listen_address": { 00:16:44.508 "trtype": "TCP", 00:16:44.508 "adrfam": "IPv4", 00:16:44.508 "traddr": "10.0.0.2", 00:16:44.508 "trsvcid": "4420" 00:16:44.508 }, 00:16:44.508 "peer_address": { 00:16:44.508 "trtype": "TCP", 00:16:44.508 "adrfam": "IPv4", 00:16:44.508 "traddr": "10.0.0.1", 00:16:44.508 "trsvcid": "53014" 00:16:44.508 }, 00:16:44.508 "auth": { 00:16:44.508 "state": "completed", 00:16:44.508 "digest": "sha512", 00:16:44.508 "dhgroup": "ffdhe8192" 00:16:44.508 } 00:16:44.508 } 00:16:44.508 ]' 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.508 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.768 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2RjODY4MjBhZGI5ZTAwNzA2ZDk4NDNmYzhiNTVjZGZjNWYyZDIzMjQyMzcwODg4kXN76Q==: --dhchap-ctrl-secret DHHC-1:03:YTRiNmUzYWE0ODg0YmExNWMwYTQ3MWRlZGVkMzMxOTA4MWQzOWU1NGRlN2M3YzBkMWI3Mzc0MWEyOWU4YjRjMZs4frs=: 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.706 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:46.642 request: 00:16:46.642 { 00:16:46.642 "name": "nvme0", 00:16:46.642 "trtype": "tcp", 00:16:46.642 "traddr": "10.0.0.2", 00:16:46.642 "adrfam": "ipv4", 00:16:46.643 "trsvcid": "4420", 00:16:46.643 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:46.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:46.643 "prchk_reftag": false, 00:16:46.643 "prchk_guard": false, 00:16:46.643 "hdgst": false, 00:16:46.643 "ddgst": false, 00:16:46.643 "dhchap_key": "key2", 00:16:46.643 "method": "bdev_nvme_attach_controller", 00:16:46.643 "req_id": 1 00:16:46.643 } 00:16:46.643 Got JSON-RPC error response 00:16:46.643 response: 00:16:46.643 { 00:16:46.643 "code": -5, 00:16:46.643 "message": "Input/output error" 00:16:46.643 } 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.643 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.211 request: 00:16:47.211 { 00:16:47.211 "name": "nvme0", 00:16:47.211 "trtype": "tcp", 00:16:47.211 "traddr": "10.0.0.2", 00:16:47.211 "adrfam": "ipv4", 00:16:47.211 "trsvcid": "4420", 00:16:47.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:47.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.211 "prchk_reftag": false, 00:16:47.211 "prchk_guard": false, 00:16:47.211 "hdgst": false, 00:16:47.211 "ddgst": false, 00:16:47.211 "dhchap_key": "key1", 00:16:47.211 "dhchap_ctrlr_key": "ckey2", 00:16:47.211 "method": "bdev_nvme_attach_controller", 00:16:47.211 "req_id": 1 00:16:47.211 } 00:16:47.211 Got JSON-RPC error response 00:16:47.211 response: 00:16:47.211 { 00:16:47.211 "code": -5, 00:16:47.211 "message": "Input/output error" 00:16:47.211 } 00:16:47.211 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:47.211 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.211 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.211 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.212 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.149 request: 00:16:48.149 { 00:16:48.149 "name": "nvme0", 00:16:48.149 "trtype": "tcp", 00:16:48.149 "traddr": "10.0.0.2", 00:16:48.149 "adrfam": "ipv4", 00:16:48.149 "trsvcid": "4420", 00:16:48.149 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.149 "prchk_reftag": false, 00:16:48.149 "prchk_guard": false, 00:16:48.149 "hdgst": false, 00:16:48.149 "ddgst": false, 00:16:48.149 "dhchap_key": "key1", 00:16:48.149 "dhchap_ctrlr_key": "ckey1", 00:16:48.149 "method": "bdev_nvme_attach_controller", 00:16:48.149 "req_id": 1 00:16:48.149 } 00:16:48.149 Got JSON-RPC error response 00:16:48.149 response: 00:16:48.149 { 00:16:48.149 "code": -5, 00:16:48.149 "message": "Input/output error" 00:16:48.149 } 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 775038 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 775038 ']' 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 775038 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.149 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775038 00:16:48.149 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.149 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:16:48.149 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775038' 00:16:48.149 killing process with pid 775038 00:16:48.149 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 775038 00:16:48.149 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 775038 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=796928 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 796928 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 796928 ']' 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.408 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 796928 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 796928 ']' 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
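(The restart captured just above, killprocess on the old target followed by nvmfappstart --wait-for-rpc -L nvmf_auth, amounts to roughly the sketch below. The nvmf_tgt command line, network namespace, and /var/tmp/spdk.sock RPC address are copied from the log; the kill/wait pair mirrors killprocess, while the readiness loop is a simplified stand-in for the waitforlisten helper and its use of rpc_get_methods as the probe is an assumption.)

  # Stop the previous target instance, then relaunch it with the nvmf_auth log component enabled
  kill "$old_pid"; wait "$old_pid" || true
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Simplified readiness wait: poll the RPC socket until the new target answers
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done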
00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.665 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.924 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.924 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:48.924 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:48.924 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.924 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.182 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.183 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.183 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.117 00:16:50.117 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.117 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.117 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.117 { 00:16:50.117 
"cntlid": 1, 00:16:50.117 "qid": 0, 00:16:50.117 "state": "enabled", 00:16:50.117 "thread": "nvmf_tgt_poll_group_000", 00:16:50.117 "listen_address": { 00:16:50.117 "trtype": "TCP", 00:16:50.117 "adrfam": "IPv4", 00:16:50.117 "traddr": "10.0.0.2", 00:16:50.117 "trsvcid": "4420" 00:16:50.117 }, 00:16:50.117 "peer_address": { 00:16:50.117 "trtype": "TCP", 00:16:50.117 "adrfam": "IPv4", 00:16:50.117 "traddr": "10.0.0.1", 00:16:50.117 "trsvcid": "36118" 00:16:50.117 }, 00:16:50.117 "auth": { 00:16:50.117 "state": "completed", 00:16:50.117 "digest": "sha512", 00:16:50.117 "dhgroup": "ffdhe8192" 00:16:50.117 } 00:16:50.117 } 00:16:50.117 ]' 00:16:50.117 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.409 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.666 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NzZhYTE3NjFjZWQ1NzU5MTRiNGEyNWNhNjRjNzFjNjZjMWI0NzJiZWI5YzcyNzU4Nzk5Mjc2YzNkMTlkNzdjNqUMKwY=: 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:51.600 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.858 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.116 request: 00:16:52.116 { 00:16:52.116 "name": "nvme0", 00:16:52.116 "trtype": "tcp", 00:16:52.116 "traddr": "10.0.0.2", 00:16:52.116 "adrfam": "ipv4", 00:16:52.116 "trsvcid": "4420", 00:16:52.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:52.116 "prchk_reftag": false, 00:16:52.116 "prchk_guard": false, 00:16:52.116 "hdgst": false, 00:16:52.116 "ddgst": false, 00:16:52.116 "dhchap_key": "key3", 00:16:52.116 "method": "bdev_nvme_attach_controller", 00:16:52.116 "req_id": 1 00:16:52.116 } 00:16:52.116 Got JSON-RPC error response 00:16:52.116 response: 00:16:52.116 { 00:16:52.116 "code": -5, 00:16:52.116 "message": "Input/output error" 00:16:52.116 } 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:52.116 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.375 request: 00:16:52.375 { 00:16:52.375 "name": "nvme0", 00:16:52.375 "trtype": "tcp", 00:16:52.375 "traddr": "10.0.0.2", 00:16:52.375 "adrfam": "ipv4", 00:16:52.375 "trsvcid": "4420", 00:16:52.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:52.375 "prchk_reftag": false, 00:16:52.375 "prchk_guard": false, 00:16:52.375 "hdgst": false, 00:16:52.375 "ddgst": false, 00:16:52.375 "dhchap_key": "key3", 00:16:52.375 "method": "bdev_nvme_attach_controller", 00:16:52.375 "req_id": 1 00:16:52.375 } 00:16:52.375 Got JSON-RPC error response 00:16:52.375 response: 00:16:52.375 { 00:16:52.375 "code": -5, 00:16:52.375 "message": "Input/output error" 00:16:52.375 } 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.375 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:52.635 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:52.895 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.155 request: 00:16:53.155 { 00:16:53.155 "name": "nvme0", 00:16:53.155 "trtype": "tcp", 00:16:53.155 "traddr": "10.0.0.2", 00:16:53.155 "adrfam": "ipv4", 00:16:53.155 "trsvcid": "4420", 00:16:53.155 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.155 "prchk_reftag": false, 00:16:53.155 "prchk_guard": false, 00:16:53.155 "hdgst": false, 00:16:53.155 "ddgst": false, 00:16:53.155 
"dhchap_key": "key0", 00:16:53.155 "dhchap_ctrlr_key": "key1", 00:16:53.155 "method": "bdev_nvme_attach_controller", 00:16:53.155 "req_id": 1 00:16:53.155 } 00:16:53.155 Got JSON-RPC error response 00:16:53.155 response: 00:16:53.155 { 00:16:53.155 "code": -5, 00:16:53.155 "message": "Input/output error" 00:16:53.155 } 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:53.155 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:53.414 00:16:53.414 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:53.414 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:53.414 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.672 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.672 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.672 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 775057 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 775057 ']' 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 775057 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775057 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775057' 00:16:53.954 killing process with pid 775057 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 775057 00:16:53.954 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 775057 00:16:54.213 
16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.213 rmmod nvme_tcp 00:16:54.213 rmmod nvme_fabrics 00:16:54.213 rmmod nvme_keyring 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 796928 ']' 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 796928 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 796928 ']' 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 796928 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.213 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796928 00:16:54.473 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.473 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.473 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796928' 00:16:54.473 killing process with pid 796928 00:16:54.473 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 796928 00:16:54.473 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 796928 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.733 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.638 16:09:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.638 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4Cq /tmp/spdk.key-sha256.5Jg /tmp/spdk.key-sha384.uPX /tmp/spdk.key-sha512.oKy /tmp/spdk.key-sha512.zQY /tmp/spdk.key-sha384.Hjs /tmp/spdk.key-sha256.nq6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:56.638 00:16:56.638 real 3m1.803s 00:16:56.638 user 7m5.963s 00:16:56.638 sys 0m25.064s 00:16:56.638 16:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.638 16:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.638 ************************************ 00:16:56.638 END TEST nvmf_auth_target 00:16:56.638 ************************************ 00:16:56.638 16:09:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.638 16:09:42 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:56.638 16:09:42 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:56.638 16:09:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:56.638 16:09:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.638 16:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.638 ************************************ 00:16:56.638 START TEST nvmf_bdevio_no_huge 00:16:56.638 ************************************ 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:56.638 * Looking for test storage... 00:16:56.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.638 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.896 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.897 16:09:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.897 16:09:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:58.800 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:58.800 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.800 
16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:58.800 Found net devices under 0000:09:00.0: cvl_0_0 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.800 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:58.801 Found net devices under 0000:09:00.1: cvl_0_1 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.801 16:09:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.801 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:16:59.060 00:16:59.060 --- 10.0.0.2 ping statistics --- 00:16:59.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.060 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:59.060 00:16:59.060 --- 10.0.0.1 ping statistics --- 00:16:59.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.060 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=799655 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 799655 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 799655 ']' 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.060 16:09:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.060 [2024-07-15 16:09:44.937734] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:16:59.060 [2024-07-15 16:09:44.937827] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:59.060 [2024-07-15 16:09:45.011687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.318 [2024-07-15 16:09:45.119616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.318 [2024-07-15 16:09:45.119681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.318 [2024-07-15 16:09:45.119695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.318 [2024-07-15 16:09:45.119706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.318 [2024-07-15 16:09:45.119716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.318 [2024-07-15 16:09:45.119805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.318 [2024-07-15 16:09:45.119867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:59.318 [2024-07-15 16:09:45.119916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:59.318 [2024-07-15 16:09:45.119918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.318 [2024-07-15 16:09:45.243215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.318 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.318 Malloc0 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:59.319 [2024-07-15 16:09:45.281568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.319 { 00:16:59.319 "params": { 00:16:59.319 "name": "Nvme$subsystem", 00:16:59.319 "trtype": "$TEST_TRANSPORT", 00:16:59.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.319 "adrfam": "ipv4", 00:16:59.319 "trsvcid": "$NVMF_PORT", 00:16:59.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.319 "hdgst": ${hdgst:-false}, 00:16:59.319 "ddgst": ${ddgst:-false} 00:16:59.319 }, 00:16:59.319 "method": "bdev_nvme_attach_controller" 00:16:59.319 } 00:16:59.319 EOF 00:16:59.319 )") 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:59.319 16:09:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.319 "params": { 00:16:59.319 "name": "Nvme1", 00:16:59.319 "trtype": "tcp", 00:16:59.319 "traddr": "10.0.0.2", 00:16:59.319 "adrfam": "ipv4", 00:16:59.319 "trsvcid": "4420", 00:16:59.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.319 "hdgst": false, 00:16:59.319 "ddgst": false 00:16:59.319 }, 00:16:59.319 "method": "bdev_nvme_attach_controller" 00:16:59.319 }' 00:16:59.576 [2024-07-15 16:09:45.330420] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:16:59.576 [2024-07-15 16:09:45.330498] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid799684 ] 00:16:59.576 [2024-07-15 16:09:45.393387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.576 [2024-07-15 16:09:45.508266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.576 [2024-07-15 16:09:45.508322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.576 [2024-07-15 16:09:45.508326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.835 I/O targets: 00:16:59.835 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:59.835 00:16:59.835 00:16:59.835 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.835 http://cunit.sourceforge.net/ 00:16:59.835 00:16:59.835 00:16:59.835 Suite: bdevio tests on: Nvme1n1 00:16:59.835 Test: blockdev write read block ...passed 00:16:59.835 Test: blockdev write zeroes read block ...passed 00:16:59.835 Test: blockdev write zeroes read no split ...passed 00:17:00.093 Test: blockdev write zeroes read split ...passed 00:17:00.093 Test: blockdev write zeroes read split partial ...passed 00:17:00.093 Test: blockdev reset ...[2024-07-15 16:09:45.907360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:00.093 [2024-07-15 16:09:45.907472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149afb0 (9): Bad file descriptor 00:17:00.093 [2024-07-15 16:09:45.925645] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:00.093 passed 00:17:00.093 Test: blockdev write read 8 blocks ...passed 00:17:00.093 Test: blockdev write read size > 128k ...passed 00:17:00.093 Test: blockdev write read invalid size ...passed 00:17:00.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:00.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:00.093 Test: blockdev write read max offset ...passed 00:17:00.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:00.351 Test: blockdev writev readv 8 blocks ...passed 00:17:00.351 Test: blockdev writev readv 30 x 1block ...passed 00:17:00.351 Test: blockdev writev readv block ...passed 00:17:00.351 Test: blockdev writev readv size > 128k ...passed 00:17:00.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:00.351 Test: blockdev comparev and writev ...[2024-07-15 16:09:46.220179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.220214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.220238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.220262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.220599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.220647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.220663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.220981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.221006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.221029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.221045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.221381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.221405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.221427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.351 [2024-07-15 16:09:46.221443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:00.351 passed 00:17:00.351 Test: blockdev nvme passthru rw ...passed 00:17:00.351 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:09:46.305195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.351 [2024-07-15 16:09:46.305222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.305375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.351 [2024-07-15 16:09:46.305397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.305529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.351 [2024-07-15 16:09:46.305552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:00.351 [2024-07-15 16:09:46.305696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.351 [2024-07-15 16:09:46.305719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:00.351 passed 00:17:00.351 Test: blockdev nvme admin passthru ...passed 00:17:00.610 Test: blockdev copy ...passed 00:17:00.610 00:17:00.610 Run Summary: Type Total Ran Passed Failed Inactive 00:17:00.610 suites 1 1 n/a 0 0 00:17:00.610 tests 23 23 23 0 0 00:17:00.610 asserts 152 152 152 0 n/a 00:17:00.610 00:17:00.610 Elapsed time = 1.303 seconds 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.870 rmmod nvme_tcp 00:17:00.870 rmmod nvme_fabrics 00:17:00.870 rmmod nvme_keyring 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 799655 ']' 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 799655 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 799655 ']' 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 799655 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 799655 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 799655' 00:17:00.870 killing process with pid 799655 00:17:00.870 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 799655 00:17:00.871 16:09:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 799655 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.472 16:09:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.377 16:09:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.377 00:17:03.377 real 0m6.684s 00:17:03.377 user 0m11.124s 00:17:03.377 sys 0m2.516s 00:17:03.377 16:09:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:03.377 16:09:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:03.377 ************************************ 00:17:03.377 END TEST nvmf_bdevio_no_huge 00:17:03.377 ************************************ 00:17:03.377 16:09:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:03.377 16:09:49 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:03.377 16:09:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:03.377 16:09:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.377 16:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.377 ************************************ 00:17:03.377 START TEST nvmf_tls 00:17:03.377 ************************************ 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:03.377 * Looking for test storage... 
00:17:03.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.377 16:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.907 
16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.907 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:05.908 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:05.908 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:05.908 Found net devices under 0000:09:00.0: cvl_0_0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:05.908 Found net devices under 0000:09:00.1: cvl_0_1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:05.908 00:17:05.908 --- 10.0.0.2 ping statistics --- 00:17:05.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.908 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:17:05.908 00:17:05.908 --- 10.0.0.1 ping statistics --- 00:17:05.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.908 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=801887 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 801887 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 801887 ']' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 [2024-07-15 16:09:51.557571] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:05.908 [2024-07-15 16:09:51.557662] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.908 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.908 [2024-07-15 16:09:51.622026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.908 [2024-07-15 16:09:51.725215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.908 [2024-07-15 16:09:51.725285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:05.908 [2024-07-15 16:09:51.725306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.908 [2024-07-15 16:09:51.725317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.908 [2024-07-15 16:09:51.725326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.908 [2024-07-15 16:09:51.725369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:05.908 16:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:06.164 true 00:17:06.164 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:06.164 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:06.422 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:06.422 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:06.422 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:06.680 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:06.680 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:06.937 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:06.938 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:06.938 16:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:07.197 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.197 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:07.456 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:07.456 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:07.456 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.456 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:07.715 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:07.715 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:07.715 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:07.974 16:09:53 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.974 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:08.232 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:08.232 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:08.232 16:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:08.490 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:08.490 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RnLjDCGy4p 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ertwfc4j0b 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RnLjDCGy4p 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ertwfc4j0b 00:17:08.748 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:09.006 16:09:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:09.571 16:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RnLjDCGy4p 00:17:09.571 16:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RnLjDCGy4p 00:17:09.571 16:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:09.827 [2024-07-15 16:09:55.576685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.827 16:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:10.084 16:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:10.341 [2024-07-15 16:09:56.150269] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.341 [2024-07-15 16:09:56.150514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.341 16:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:10.597 malloc0 00:17:10.597 16:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:10.854 16:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RnLjDCGy4p 00:17:11.112 [2024-07-15 16:09:56.982506] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:11.112 16:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RnLjDCGy4p 00:17:11.112 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.105 Initializing NVMe Controllers 00:17:21.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:21.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:21.105 Initialization complete. Launching workers. 
00:17:21.105 ======================================================== 00:17:21.105 Latency(us) 00:17:21.105 Device Information : IOPS MiB/s Average min max 00:17:21.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8777.50 34.29 7293.34 1194.95 8511.13 00:17:21.105 ======================================================== 00:17:21.105 Total : 8777.50 34.29 7293.34 1194.95 8511.13 00:17:21.105 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RnLjDCGy4p 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RnLjDCGy4p' 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=803667 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 803667 /var/tmp/bdevperf.sock 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 803667 ']' 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.365 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.365 [2024-07-15 16:10:07.156307] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:21.365 [2024-07-15 16:10:07.156378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803667 ] 00:17:21.365 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.365 [2024-07-15 16:10:07.212845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.365 [2024-07-15 16:10:07.318434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.623 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.623 16:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:21.623 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RnLjDCGy4p 00:17:21.881 [2024-07-15 16:10:07.700499] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.881 [2024-07-15 16:10:07.700628] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:21.881 TLSTESTn1 00:17:21.881 16:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.140 Running I/O for 10 seconds... 00:17:32.122 00:17:32.122 Latency(us) 00:17:32.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.123 Verification LBA range: start 0x0 length 0x2000 00:17:32.123 TLSTESTn1 : 10.03 3152.38 12.31 0.00 0.00 40517.99 7718.68 45632.47 00:17:32.123 =================================================================================================================== 00:17:32.123 Total : 3152.38 12.31 0.00 0.00 40517.99 7718.68 45632.47 00:17:32.123 0 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 803667 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 803667 ']' 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 803667 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.123 16:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 803667 00:17:32.123 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:32.123 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:32.123 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 803667' 00:17:32.123 killing process with pid 803667 00:17:32.123 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 803667 00:17:32.123 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.123 00:17:32.123 Latency(us) 00:17:32.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:17:32.123 =================================================================================================================== 00:17:32.123 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.123 [2024-07-15 16:10:18.008439] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.123 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 803667 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ertwfc4j0b 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ertwfc4j0b 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ertwfc4j0b 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ertwfc4j0b' 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=804981 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 804981 /var/tmp/bdevperf.sock 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 804981 ']' 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.381 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.381 [2024-07-15 16:10:18.323195] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:32.381 [2024-07-15 16:10:18.323290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804981 ] 00:17:32.381 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.381 [2024-07-15 16:10:18.382175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.639 [2024-07-15 16:10:18.491080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.639 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.639 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:32.639 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ertwfc4j0b 00:17:32.898 [2024-07-15 16:10:18.835735] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.898 [2024-07-15 16:10:18.835860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.898 [2024-07-15 16:10:18.844092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.898 [2024-07-15 16:10:18.844634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1977f90 (107): Transport endpoint is not connected 00:17:32.898 [2024-07-15 16:10:18.845624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1977f90 (9): Bad file descriptor 00:17:32.898 [2024-07-15 16:10:18.846623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.898 [2024-07-15 16:10:18.846643] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:32.898 [2024-07-15 16:10:18.846659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:32.898 request: 00:17:32.898 { 00:17:32.898 "name": "TLSTEST", 00:17:32.898 "trtype": "tcp", 00:17:32.898 "traddr": "10.0.0.2", 00:17:32.898 "adrfam": "ipv4", 00:17:32.898 "trsvcid": "4420", 00:17:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.898 "prchk_reftag": false, 00:17:32.898 "prchk_guard": false, 00:17:32.898 "hdgst": false, 00:17:32.898 "ddgst": false, 00:17:32.898 "psk": "/tmp/tmp.ertwfc4j0b", 00:17:32.898 "method": "bdev_nvme_attach_controller", 00:17:32.898 "req_id": 1 00:17:32.898 } 00:17:32.898 Got JSON-RPC error response 00:17:32.898 response: 00:17:32.898 { 00:17:32.898 "code": -5, 00:17:32.898 "message": "Input/output error" 00:17:32.898 } 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 804981 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 804981 ']' 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 804981 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 804981 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 804981' 00:17:32.898 killing process with pid 804981 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 804981 00:17:32.898 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.898 00:17:32.898 Latency(us) 00:17:32.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.898 =================================================================================================================== 00:17:32.898 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.898 [2024-07-15 16:10:18.894548] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.898 16:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 804981 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RnLjDCGy4p 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RnLjDCGy4p 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RnLjDCGy4p 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RnLjDCGy4p' 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=805117 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 805117 /var/tmp/bdevperf.sock 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 805117 ']' 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.156 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.416 [2024-07-15 16:10:19.196480] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:33.416 [2024-07-15 16:10:19.196564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805117 ] 00:17:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.416 [2024-07-15 16:10:19.254695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.416 [2024-07-15 16:10:19.356615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.674 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.674 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:33.674 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RnLjDCGy4p 00:17:33.934 [2024-07-15 16:10:19.719924] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.934 [2024-07-15 16:10:19.720080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:33.934 [2024-07-15 16:10:19.726298] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:33.934 [2024-07-15 16:10:19.726334] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:33.934 [2024-07-15 16:10:19.726402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:33.934 [2024-07-15 16:10:19.726910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9f90 (107): Transport endpoint is not connected 00:17:33.934 [2024-07-15 16:10:19.727899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9f90 (9): Bad file descriptor 00:17:33.934 [2024-07-15 16:10:19.728898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:33.934 [2024-07-15 16:10:19.728923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:33.934 [2024-07-15 16:10:19.728961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:33.934 request: 00:17:33.934 { 00:17:33.934 "name": "TLSTEST", 00:17:33.934 "trtype": "tcp", 00:17:33.934 "traddr": "10.0.0.2", 00:17:33.934 "adrfam": "ipv4", 00:17:33.934 "trsvcid": "4420", 00:17:33.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.934 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:33.934 "prchk_reftag": false, 00:17:33.934 "prchk_guard": false, 00:17:33.934 "hdgst": false, 00:17:33.934 "ddgst": false, 00:17:33.934 "psk": "/tmp/tmp.RnLjDCGy4p", 00:17:33.934 "method": "bdev_nvme_attach_controller", 00:17:33.934 "req_id": 1 00:17:33.934 } 00:17:33.934 Got JSON-RPC error response 00:17:33.934 response: 00:17:33.934 { 00:17:33.934 "code": -5, 00:17:33.934 "message": "Input/output error" 00:17:33.934 } 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 805117 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 805117 ']' 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 805117 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805117 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805117' 00:17:33.934 killing process with pid 805117 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 805117 00:17:33.934 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.934 00:17:33.934 Latency(us) 00:17:33.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.934 =================================================================================================================== 00:17:33.934 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.934 [2024-07-15 16:10:19.775710] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:33.934 16:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 805117 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RnLjDCGy4p 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RnLjDCGy4p 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RnLjDCGy4p 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RnLjDCGy4p' 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=805258 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 805258 /var/tmp/bdevperf.sock 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 805258 ']' 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.201 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.202 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.202 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.202 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.202 [2024-07-15 16:10:20.079007] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:34.202 [2024-07-15 16:10:20.079096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805258 ] 00:17:34.202 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.202 [2024-07-15 16:10:20.145113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.459 [2024-07-15 16:10:20.261740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.459 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.459 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:34.459 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RnLjDCGy4p 00:17:34.719 [2024-07-15 16:10:20.646747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.719 [2024-07-15 16:10:20.646877] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:34.719 [2024-07-15 16:10:20.658701] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.719 [2024-07-15 16:10:20.658735] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.719 [2024-07-15 16:10:20.658793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:34.719 [2024-07-15 16:10:20.659017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50f90 (107): Transport endpoint is not connected 00:17:34.719 [2024-07-15 16:10:20.660006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50f90 (9): Bad file descriptor 00:17:34.719 [2024-07-15 16:10:20.661005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:34.719 [2024-07-15 16:10:20.661033] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:34.719 [2024-07-15 16:10:20.661051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:34.719 request: 00:17:34.719 { 00:17:34.719 "name": "TLSTEST", 00:17:34.719 "trtype": "tcp", 00:17:34.719 "traddr": "10.0.0.2", 00:17:34.719 "adrfam": "ipv4", 00:17:34.719 "trsvcid": "4420", 00:17:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:34.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.719 "prchk_reftag": false, 00:17:34.719 "prchk_guard": false, 00:17:34.719 "hdgst": false, 00:17:34.719 "ddgst": false, 00:17:34.719 "psk": "/tmp/tmp.RnLjDCGy4p", 00:17:34.719 "method": "bdev_nvme_attach_controller", 00:17:34.719 "req_id": 1 00:17:34.719 } 00:17:34.719 Got JSON-RPC error response 00:17:34.719 response: 00:17:34.719 { 00:17:34.719 "code": -5, 00:17:34.719 "message": "Input/output error" 00:17:34.719 } 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 805258 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 805258 ']' 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 805258 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805258 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805258' 00:17:34.719 killing process with pid 805258 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 805258 00:17:34.719 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.719 00:17:34.719 Latency(us) 00:17:34.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.719 =================================================================================================================== 00:17:34.719 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.719 [2024-07-15 16:10:20.713121] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:34.719 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 805258 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=805314 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 805314 /var/tmp/bdevperf.sock 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 805314 ']' 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.978 16:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.237 [2024-07-15 16:10:21.006663] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:35.237 [2024-07-15 16:10:21.006748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805314 ] 00:17:35.237 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.237 [2024-07-15 16:10:21.066537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.237 [2024-07-15 16:10:21.170541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.495 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.495 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:35.495 16:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:35.752 [2024-07-15 16:10:21.520889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:35.752 [2024-07-15 16:10:21.522665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1255770 (9): Bad file descriptor 00:17:35.752 [2024-07-15 16:10:21.523661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:35.752 [2024-07-15 16:10:21.523681] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:35.752 [2024-07-15 16:10:21.523708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:35.752 request: 00:17:35.752 { 00:17:35.752 "name": "TLSTEST", 00:17:35.752 "trtype": "tcp", 00:17:35.752 "traddr": "10.0.0.2", 00:17:35.752 "adrfam": "ipv4", 00:17:35.752 "trsvcid": "4420", 00:17:35.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.752 "prchk_reftag": false, 00:17:35.752 "prchk_guard": false, 00:17:35.752 "hdgst": false, 00:17:35.752 "ddgst": false, 00:17:35.752 "method": "bdev_nvme_attach_controller", 00:17:35.752 "req_id": 1 00:17:35.752 } 00:17:35.752 Got JSON-RPC error response 00:17:35.752 response: 00:17:35.752 { 00:17:35.752 "code": -5, 00:17:35.752 "message": "Input/output error" 00:17:35.752 } 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 805314 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 805314 ']' 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 805314 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805314 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805314' 00:17:35.752 killing process with pid 805314 00:17:35.752 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 805314 00:17:35.752 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.752 00:17:35.752 Latency(us) 00:17:35.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.752 =================================================================================================================== 00:17:35.753 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.753 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 805314 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 801887 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 801887 ']' 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 801887 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801887 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:36.009 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:36.010 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801887' 00:17:36.010 killing 
process with pid 801887 00:17:36.010 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 801887 00:17:36.010 [2024-07-15 16:10:21.814867] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:36.010 16:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 801887 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.cDbVLeguM5 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.cDbVLeguM5 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=805517 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 805517 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 805517 ']' 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.269 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.269 [2024-07-15 16:10:22.177503] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
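Editor's note: the key_long value generated above comes from the test helpers' format_interchange_psk/format_key step, which wraps the configured key in the NVMe/TCP PSK interchange form "NVMeTLSkey-1:<digest>:<base64 payload>:". A minimal Python sketch of that encoding, assuming the payload is the key text with its CRC-32 appended (the exact CRC byte order is an assumption; the inline python helper in nvmf/common.sh defines the real convention):

    import base64, zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # Assumption: payload = key bytes followed by CRC-32 of the key,
        # little-endian; only the trailing characters change if the byte
        # order assumption is wrong.
        crc = zlib.crc32(key.encode()).to_bytes(4, "little")
        payload = base64.b64encode(key.encode() + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, payload)

    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
    # The log's key_long for this input is
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: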
00:17:36.269 [2024-07-15 16:10:22.177599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.269 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.269 [2024-07-15 16:10:22.242465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.526 [2024-07-15 16:10:22.352123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.526 [2024-07-15 16:10:22.352191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.526 [2024-07-15 16:10:22.352205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.526 [2024-07-15 16:10:22.352216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.526 [2024-07-15 16:10:22.352226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.526 [2024-07-15 16:10:22.352254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cDbVLeguM5 00:17:36.526 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.783 [2024-07-15 16:10:22.761618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.783 16:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.349 16:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.349 [2024-07-15 16:10:23.299089] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.349 [2024-07-15 16:10:23.299360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.349 16:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.607 malloc0 00:17:37.607 16:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.864 16:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.cDbVLeguM5 00:17:38.121 [2024-07-15 16:10:24.035918] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cDbVLeguM5 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cDbVLeguM5' 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=805710 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 805710 /var/tmp/bdevperf.sock 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 805710 ']' 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.121 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.121 [2024-07-15 16:10:24.100520] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:38.121 [2024-07-15 16:10:24.100600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805710 ] 00:17:38.378 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.378 [2024-07-15 16:10:24.159348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.378 [2024-07-15 16:10:24.265606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.379 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.379 16:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:38.379 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:17:38.638 [2024-07-15 16:10:24.594328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.638 [2024-07-15 16:10:24.594445] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.897 TLSTESTn1 00:17:38.897 16:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:38.897 Running I/O for 10 seconds... 00:17:48.918 00:17:48.918 Latency(us) 00:17:48.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.918 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.918 Verification LBA range: start 0x0 length 0x2000 00:17:48.918 TLSTESTn1 : 10.02 3300.79 12.89 0.00 0.00 38711.81 7330.32 36117.62 00:17:48.918 =================================================================================================================== 00:17:48.918 Total : 3300.79 12.89 0.00 0.00 38711.81 7330.32 36117.62 00:17:48.918 0 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 805710 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 805710 ']' 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 805710 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805710 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:48.918 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805710' 00:17:48.918 killing process with pid 805710 00:17:48.919 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 805710 00:17:48.919 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.919 00:17:48.919 Latency(us) 00:17:48.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:17:48.919 =================================================================================================================== 00:17:48.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.919 [2024-07-15 16:10:34.862300] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:48.919 16:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 805710 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.cDbVLeguM5 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cDbVLeguM5 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cDbVLeguM5 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cDbVLeguM5 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cDbVLeguM5' 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=807023 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 807023 /var/tmp/bdevperf.sock 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 807023 ']' 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.176 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.176 [2024-07-15 16:10:35.144280] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:49.176 [2024-07-15 16:10:35.144367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807023 ] 00:17:49.176 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.433 [2024-07-15 16:10:35.204688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.433 [2024-07-15 16:10:35.309938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.433 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.433 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.433 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:17:49.691 [2024-07-15 16:10:35.636697] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.692 [2024-07-15 16:10:35.636781] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:49.692 [2024-07-15 16:10:35.636794] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.cDbVLeguM5 00:17:49.692 request: 00:17:49.692 { 00:17:49.692 "name": "TLSTEST", 00:17:49.692 "trtype": "tcp", 00:17:49.692 "traddr": "10.0.0.2", 00:17:49.692 "adrfam": "ipv4", 00:17:49.692 "trsvcid": "4420", 00:17:49.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.692 "prchk_reftag": false, 00:17:49.692 "prchk_guard": false, 00:17:49.692 "hdgst": false, 00:17:49.692 "ddgst": false, 00:17:49.692 "psk": "/tmp/tmp.cDbVLeguM5", 00:17:49.692 "method": "bdev_nvme_attach_controller", 00:17:49.692 "req_id": 1 00:17:49.692 } 00:17:49.692 Got JSON-RPC error response 00:17:49.692 response: 00:17:49.692 { 00:17:49.692 "code": -1, 00:17:49.692 "message": "Operation not permitted" 00:17:49.692 } 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 807023 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 807023 ']' 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 807023 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807023 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807023' 00:17:49.692 killing process with pid 807023 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 807023 00:17:49.692 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.692 00:17:49.692 Latency(us) 00:17:49.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.692 =================================================================================================================== 
00:17:49.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.692 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 807023 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 805517 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 805517 ']' 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 805517 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.950 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805517 00:17:50.209 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:50.209 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:50.209 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805517' 00:17:50.209 killing process with pid 805517 00:17:50.209 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 805517 00:17:50.209 [2024-07-15 16:10:35.957987] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:50.209 16:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 805517 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=807170 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 807170 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 807170 ']' 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.468 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.468 [2024-07-15 16:10:36.285477] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:50.468 [2024-07-15 16:10:36.285566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.468 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.468 [2024-07-15 16:10:36.348873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.468 [2024-07-15 16:10:36.449426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.468 [2024-07-15 16:10:36.449487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.468 [2024-07-15 16:10:36.449511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.468 [2024-07-15 16:10:36.449521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.468 [2024-07-15 16:10:36.449530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.468 [2024-07-15 16:10:36.449556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cDbVLeguM5 00:17:50.726 16:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.984 [2024-07-15 16:10:36.799220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.984 16:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.241 16:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.499 [2024-07-15 16:10:37.288503] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:17:51.499 [2024-07-15 16:10:37.288734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.499 16:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:51.755 malloc0 00:17:51.755 16:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.014 16:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:17:52.274 [2024-07-15 16:10:38.033684] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:52.274 [2024-07-15 16:10:38.033722] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:52.274 [2024-07-15 16:10:38.033761] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:52.274 request: 00:17:52.274 { 00:17:52.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.274 "host": "nqn.2016-06.io.spdk:host1", 00:17:52.274 "psk": "/tmp/tmp.cDbVLeguM5", 00:17:52.274 "method": "nvmf_subsystem_add_host", 00:17:52.274 "req_id": 1 00:17:52.274 } 00:17:52.274 Got JSON-RPC error response 00:17:52.274 response: 00:17:52.274 { 00:17:52.274 "code": -32603, 00:17:52.274 "message": "Internal error" 00:17:52.274 } 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 807170 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 807170 ']' 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 807170 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807170 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807170' 00:17:52.274 killing process with pid 807170 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 807170 00:17:52.274 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 807170 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.cDbVLeguM5 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=807467 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 807467 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 807467 ']' 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.533 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 [2024-07-15 16:10:38.404585] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:52.533 [2024-07-15 16:10:38.404673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.533 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.533 [2024-07-15 16:10:38.466657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.792 [2024-07-15 16:10:38.571924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.792 [2024-07-15 16:10:38.571982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.792 [2024-07-15 16:10:38.572005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.792 [2024-07-15 16:10:38.572016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.792 [2024-07-15 16:10:38.572026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
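Editor's note: the two failures earlier in this run (bdev_nvme_load_psk on the initiator side, tcp_load_psk/nvmf_tcp_subsystem_add_host on the target side) come from the same restriction: a PSK file that is accessible to group or others is rejected, which is why the 0666 key fails and the key is chmod'd back to 0600 before the successful run below. A sketch of that kind of check; the exact mode mask SPDK enforces is an assumption here, not its literal code:

    import os, stat

    def check_psk_permissions(path: str) -> None:
        # Assumption: the PSK file must not be readable/writable by group or
        # others (i.e. 0600-style permissions), matching the observed
        # pass/fail behaviour for 0600 vs 0666 in this log.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError(f"Incorrect permissions for PSK file {path}: {oct(mode)}")

    # check_psk_permissions("/tmp/tmp.cDbVLeguM5")  # passes for 0600, raises for 0666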
00:17:52.792 [2024-07-15 16:10:38.572060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cDbVLeguM5 00:17:52.792 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.050 [2024-07-15 16:10:38.939061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.050 16:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.312 16:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.584 [2024-07-15 16:10:39.428384] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.584 [2024-07-15 16:10:39.428620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.584 16:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.842 malloc0 00:17:53.842 16:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.100 16:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:17:54.358 [2024-07-15 16:10:40.164495] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=807739 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 807739 /var/tmp/bdevperf.sock 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 807739 ']' 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.358 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.358 [2024-07-15 16:10:40.225140] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:54.358 [2024-07-15 16:10:40.225216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807739 ] 00:17:54.358 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.358 [2024-07-15 16:10:40.281467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.614 [2024-07-15 16:10:40.387922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.614 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.614 16:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.614 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:17:54.870 [2024-07-15 16:10:40.717019] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.870 [2024-07-15 16:10:40.717182] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:54.870 TLSTESTn1 00:17:54.870 16:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:55.128 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:55.128 "subsystems": [ 00:17:55.128 { 00:17:55.128 "subsystem": "keyring", 00:17:55.128 "config": [] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "iobuf", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "iobuf_set_options", 00:17:55.128 "params": { 00:17:55.128 "small_pool_count": 8192, 00:17:55.128 "large_pool_count": 1024, 00:17:55.128 "small_bufsize": 8192, 00:17:55.128 "large_bufsize": 135168 00:17:55.128 } 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "sock", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "sock_set_default_impl", 00:17:55.128 "params": { 00:17:55.128 "impl_name": "posix" 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "sock_impl_set_options", 00:17:55.128 "params": { 00:17:55.128 "impl_name": "ssl", 00:17:55.128 "recv_buf_size": 4096, 00:17:55.128 "send_buf_size": 4096, 00:17:55.128 "enable_recv_pipe": true, 00:17:55.128 "enable_quickack": false, 00:17:55.128 "enable_placement_id": 0, 00:17:55.128 "enable_zerocopy_send_server": true, 00:17:55.128 "enable_zerocopy_send_client": false, 00:17:55.128 "zerocopy_threshold": 0, 00:17:55.128 "tls_version": 0, 00:17:55.128 "enable_ktls": false 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "sock_impl_set_options", 00:17:55.128 "params": { 00:17:55.128 "impl_name": "posix", 00:17:55.128 "recv_buf_size": 2097152, 00:17:55.128 
"send_buf_size": 2097152, 00:17:55.128 "enable_recv_pipe": true, 00:17:55.128 "enable_quickack": false, 00:17:55.128 "enable_placement_id": 0, 00:17:55.128 "enable_zerocopy_send_server": true, 00:17:55.128 "enable_zerocopy_send_client": false, 00:17:55.128 "zerocopy_threshold": 0, 00:17:55.128 "tls_version": 0, 00:17:55.128 "enable_ktls": false 00:17:55.128 } 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "vmd", 00:17:55.128 "config": [] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "accel", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "accel_set_options", 00:17:55.128 "params": { 00:17:55.128 "small_cache_size": 128, 00:17:55.128 "large_cache_size": 16, 00:17:55.128 "task_count": 2048, 00:17:55.128 "sequence_count": 2048, 00:17:55.128 "buf_count": 2048 00:17:55.128 } 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "bdev", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "bdev_set_options", 00:17:55.128 "params": { 00:17:55.128 "bdev_io_pool_size": 65535, 00:17:55.128 "bdev_io_cache_size": 256, 00:17:55.128 "bdev_auto_examine": true, 00:17:55.128 "iobuf_small_cache_size": 128, 00:17:55.128 "iobuf_large_cache_size": 16 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_raid_set_options", 00:17:55.128 "params": { 00:17:55.128 "process_window_size_kb": 1024 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_iscsi_set_options", 00:17:55.128 "params": { 00:17:55.128 "timeout_sec": 30 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_nvme_set_options", 00:17:55.128 "params": { 00:17:55.128 "action_on_timeout": "none", 00:17:55.128 "timeout_us": 0, 00:17:55.128 "timeout_admin_us": 0, 00:17:55.128 "keep_alive_timeout_ms": 10000, 00:17:55.128 "arbitration_burst": 0, 00:17:55.128 "low_priority_weight": 0, 00:17:55.128 "medium_priority_weight": 0, 00:17:55.128 "high_priority_weight": 0, 00:17:55.128 "nvme_adminq_poll_period_us": 10000, 00:17:55.128 "nvme_ioq_poll_period_us": 0, 00:17:55.128 "io_queue_requests": 0, 00:17:55.128 "delay_cmd_submit": true, 00:17:55.128 "transport_retry_count": 4, 00:17:55.128 "bdev_retry_count": 3, 00:17:55.128 "transport_ack_timeout": 0, 00:17:55.128 "ctrlr_loss_timeout_sec": 0, 00:17:55.128 "reconnect_delay_sec": 0, 00:17:55.128 "fast_io_fail_timeout_sec": 0, 00:17:55.128 "disable_auto_failback": false, 00:17:55.128 "generate_uuids": false, 00:17:55.128 "transport_tos": 0, 00:17:55.128 "nvme_error_stat": false, 00:17:55.128 "rdma_srq_size": 0, 00:17:55.128 "io_path_stat": false, 00:17:55.128 "allow_accel_sequence": false, 00:17:55.128 "rdma_max_cq_size": 0, 00:17:55.128 "rdma_cm_event_timeout_ms": 0, 00:17:55.128 "dhchap_digests": [ 00:17:55.128 "sha256", 00:17:55.128 "sha384", 00:17:55.128 "sha512" 00:17:55.128 ], 00:17:55.128 "dhchap_dhgroups": [ 00:17:55.128 "null", 00:17:55.128 "ffdhe2048", 00:17:55.128 "ffdhe3072", 00:17:55.128 "ffdhe4096", 00:17:55.128 "ffdhe6144", 00:17:55.128 "ffdhe8192" 00:17:55.128 ] 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_nvme_set_hotplug", 00:17:55.128 "params": { 00:17:55.128 "period_us": 100000, 00:17:55.128 "enable": false 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_malloc_create", 00:17:55.128 "params": { 00:17:55.128 "name": "malloc0", 00:17:55.128 "num_blocks": 8192, 00:17:55.128 "block_size": 4096, 00:17:55.128 "physical_block_size": 4096, 00:17:55.128 "uuid": 
"f585ecda-b712-49e9-b162-f50e9ac94b67", 00:17:55.128 "optimal_io_boundary": 0 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "bdev_wait_for_examine" 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "nbd", 00:17:55.128 "config": [] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "scheduler", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "framework_set_scheduler", 00:17:55.128 "params": { 00:17:55.128 "name": "static" 00:17:55.128 } 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "subsystem": "nvmf", 00:17:55.128 "config": [ 00:17:55.128 { 00:17:55.128 "method": "nvmf_set_config", 00:17:55.128 "params": { 00:17:55.128 "discovery_filter": "match_any", 00:17:55.128 "admin_cmd_passthru": { 00:17:55.128 "identify_ctrlr": false 00:17:55.128 } 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_set_max_subsystems", 00:17:55.128 "params": { 00:17:55.128 "max_subsystems": 1024 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_set_crdt", 00:17:55.128 "params": { 00:17:55.128 "crdt1": 0, 00:17:55.128 "crdt2": 0, 00:17:55.128 "crdt3": 0 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_create_transport", 00:17:55.128 "params": { 00:17:55.128 "trtype": "TCP", 00:17:55.128 "max_queue_depth": 128, 00:17:55.128 "max_io_qpairs_per_ctrlr": 127, 00:17:55.128 "in_capsule_data_size": 4096, 00:17:55.128 "max_io_size": 131072, 00:17:55.128 "io_unit_size": 131072, 00:17:55.128 "max_aq_depth": 128, 00:17:55.128 "num_shared_buffers": 511, 00:17:55.128 "buf_cache_size": 4294967295, 00:17:55.128 "dif_insert_or_strip": false, 00:17:55.128 "zcopy": false, 00:17:55.128 "c2h_success": false, 00:17:55.128 "sock_priority": 0, 00:17:55.128 "abort_timeout_sec": 1, 00:17:55.128 "ack_timeout": 0, 00:17:55.128 "data_wr_pool_size": 0 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_create_subsystem", 00:17:55.128 "params": { 00:17:55.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.128 "allow_any_host": false, 00:17:55.128 "serial_number": "SPDK00000000000001", 00:17:55.128 "model_number": "SPDK bdev Controller", 00:17:55.128 "max_namespaces": 10, 00:17:55.128 "min_cntlid": 1, 00:17:55.128 "max_cntlid": 65519, 00:17:55.128 "ana_reporting": false 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_subsystem_add_host", 00:17:55.128 "params": { 00:17:55.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.128 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.128 "psk": "/tmp/tmp.cDbVLeguM5" 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_subsystem_add_ns", 00:17:55.128 "params": { 00:17:55.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.128 "namespace": { 00:17:55.128 "nsid": 1, 00:17:55.128 "bdev_name": "malloc0", 00:17:55.128 "nguid": "F585ECDAB71249E9B162F50E9AC94B67", 00:17:55.128 "uuid": "f585ecda-b712-49e9-b162-f50e9ac94b67", 00:17:55.128 "no_auto_visible": false 00:17:55.128 } 00:17:55.128 } 00:17:55.128 }, 00:17:55.128 { 00:17:55.128 "method": "nvmf_subsystem_add_listener", 00:17:55.128 "params": { 00:17:55.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.128 "listen_address": { 00:17:55.128 "trtype": "TCP", 00:17:55.128 "adrfam": "IPv4", 00:17:55.128 "traddr": "10.0.0.2", 00:17:55.128 "trsvcid": "4420" 00:17:55.128 }, 00:17:55.128 "secure_channel": true 00:17:55.128 } 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 } 00:17:55.128 ] 00:17:55.128 }' 00:17:55.128 16:10:41 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:55.693 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:55.693 "subsystems": [ 00:17:55.693 { 00:17:55.693 "subsystem": "keyring", 00:17:55.693 "config": [] 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "subsystem": "iobuf", 00:17:55.693 "config": [ 00:17:55.693 { 00:17:55.693 "method": "iobuf_set_options", 00:17:55.693 "params": { 00:17:55.693 "small_pool_count": 8192, 00:17:55.693 "large_pool_count": 1024, 00:17:55.693 "small_bufsize": 8192, 00:17:55.693 "large_bufsize": 135168 00:17:55.693 } 00:17:55.693 } 00:17:55.693 ] 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "subsystem": "sock", 00:17:55.693 "config": [ 00:17:55.693 { 00:17:55.693 "method": "sock_set_default_impl", 00:17:55.693 "params": { 00:17:55.693 "impl_name": "posix" 00:17:55.693 } 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "method": "sock_impl_set_options", 00:17:55.693 "params": { 00:17:55.693 "impl_name": "ssl", 00:17:55.693 "recv_buf_size": 4096, 00:17:55.693 "send_buf_size": 4096, 00:17:55.693 "enable_recv_pipe": true, 00:17:55.693 "enable_quickack": false, 00:17:55.693 "enable_placement_id": 0, 00:17:55.693 "enable_zerocopy_send_server": true, 00:17:55.693 "enable_zerocopy_send_client": false, 00:17:55.693 "zerocopy_threshold": 0, 00:17:55.693 "tls_version": 0, 00:17:55.693 "enable_ktls": false 00:17:55.693 } 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "method": "sock_impl_set_options", 00:17:55.693 "params": { 00:17:55.693 "impl_name": "posix", 00:17:55.693 "recv_buf_size": 2097152, 00:17:55.693 "send_buf_size": 2097152, 00:17:55.693 "enable_recv_pipe": true, 00:17:55.693 "enable_quickack": false, 00:17:55.693 "enable_placement_id": 0, 00:17:55.693 "enable_zerocopy_send_server": true, 00:17:55.693 "enable_zerocopy_send_client": false, 00:17:55.693 "zerocopy_threshold": 0, 00:17:55.693 "tls_version": 0, 00:17:55.693 "enable_ktls": false 00:17:55.693 } 00:17:55.693 } 00:17:55.693 ] 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "subsystem": "vmd", 00:17:55.693 "config": [] 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "subsystem": "accel", 00:17:55.693 "config": [ 00:17:55.693 { 00:17:55.693 "method": "accel_set_options", 00:17:55.693 "params": { 00:17:55.693 "small_cache_size": 128, 00:17:55.693 "large_cache_size": 16, 00:17:55.693 "task_count": 2048, 00:17:55.693 "sequence_count": 2048, 00:17:55.693 "buf_count": 2048 00:17:55.693 } 00:17:55.693 } 00:17:55.693 ] 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "subsystem": "bdev", 00:17:55.693 "config": [ 00:17:55.693 { 00:17:55.693 "method": "bdev_set_options", 00:17:55.693 "params": { 00:17:55.693 "bdev_io_pool_size": 65535, 00:17:55.693 "bdev_io_cache_size": 256, 00:17:55.693 "bdev_auto_examine": true, 00:17:55.693 "iobuf_small_cache_size": 128, 00:17:55.693 "iobuf_large_cache_size": 16 00:17:55.693 } 00:17:55.693 }, 00:17:55.693 { 00:17:55.693 "method": "bdev_raid_set_options", 00:17:55.693 "params": { 00:17:55.693 "process_window_size_kb": 1024 00:17:55.693 } 00:17:55.693 }, 00:17:55.694 { 00:17:55.694 "method": "bdev_iscsi_set_options", 00:17:55.694 "params": { 00:17:55.694 "timeout_sec": 30 00:17:55.694 } 00:17:55.694 }, 00:17:55.694 { 00:17:55.694 "method": "bdev_nvme_set_options", 00:17:55.694 "params": { 00:17:55.694 "action_on_timeout": "none", 00:17:55.694 "timeout_us": 0, 00:17:55.694 "timeout_admin_us": 0, 00:17:55.694 "keep_alive_timeout_ms": 10000, 00:17:55.694 "arbitration_burst": 0, 
00:17:55.694 "low_priority_weight": 0, 00:17:55.694 "medium_priority_weight": 0, 00:17:55.694 "high_priority_weight": 0, 00:17:55.694 "nvme_adminq_poll_period_us": 10000, 00:17:55.694 "nvme_ioq_poll_period_us": 0, 00:17:55.694 "io_queue_requests": 512, 00:17:55.694 "delay_cmd_submit": true, 00:17:55.694 "transport_retry_count": 4, 00:17:55.694 "bdev_retry_count": 3, 00:17:55.694 "transport_ack_timeout": 0, 00:17:55.694 "ctrlr_loss_timeout_sec": 0, 00:17:55.694 "reconnect_delay_sec": 0, 00:17:55.694 "fast_io_fail_timeout_sec": 0, 00:17:55.694 "disable_auto_failback": false, 00:17:55.694 "generate_uuids": false, 00:17:55.694 "transport_tos": 0, 00:17:55.694 "nvme_error_stat": false, 00:17:55.694 "rdma_srq_size": 0, 00:17:55.694 "io_path_stat": false, 00:17:55.694 "allow_accel_sequence": false, 00:17:55.694 "rdma_max_cq_size": 0, 00:17:55.694 "rdma_cm_event_timeout_ms": 0, 00:17:55.694 "dhchap_digests": [ 00:17:55.694 "sha256", 00:17:55.694 "sha384", 00:17:55.694 "sha512" 00:17:55.694 ], 00:17:55.694 "dhchap_dhgroups": [ 00:17:55.694 "null", 00:17:55.694 "ffdhe2048", 00:17:55.694 "ffdhe3072", 00:17:55.694 "ffdhe4096", 00:17:55.694 "ffdhe6144", 00:17:55.694 "ffdhe8192" 00:17:55.694 ] 00:17:55.694 } 00:17:55.694 }, 00:17:55.694 { 00:17:55.694 "method": "bdev_nvme_attach_controller", 00:17:55.694 "params": { 00:17:55.694 "name": "TLSTEST", 00:17:55.694 "trtype": "TCP", 00:17:55.694 "adrfam": "IPv4", 00:17:55.694 "traddr": "10.0.0.2", 00:17:55.694 "trsvcid": "4420", 00:17:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.694 "prchk_reftag": false, 00:17:55.694 "prchk_guard": false, 00:17:55.694 "ctrlr_loss_timeout_sec": 0, 00:17:55.694 "reconnect_delay_sec": 0, 00:17:55.694 "fast_io_fail_timeout_sec": 0, 00:17:55.694 "psk": "/tmp/tmp.cDbVLeguM5", 00:17:55.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.694 "hdgst": false, 00:17:55.694 "ddgst": false 00:17:55.694 } 00:17:55.694 }, 00:17:55.694 { 00:17:55.694 "method": "bdev_nvme_set_hotplug", 00:17:55.694 "params": { 00:17:55.694 "period_us": 100000, 00:17:55.694 "enable": false 00:17:55.694 } 00:17:55.694 }, 00:17:55.694 { 00:17:55.694 "method": "bdev_wait_for_examine" 00:17:55.694 } 00:17:55.694 ] 00:17:55.694 }, 00:17:55.694 { 00:17:55.694 "subsystem": "nbd", 00:17:55.694 "config": [] 00:17:55.694 } 00:17:55.694 ] 00:17:55.694 }' 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 807739 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 807739 ']' 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 807739 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807739 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807739' 00:17:55.694 killing process with pid 807739 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 807739 00:17:55.694 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.694 00:17:55.694 Latency(us) 00:17:55.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:55.694 =================================================================================================================== 00:17:55.694 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.694 [2024-07-15 16:10:41.470178] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.694 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 807739 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 807467 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 807467 ']' 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 807467 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807467 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807467' 00:17:55.952 killing process with pid 807467 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 807467 00:17:55.952 [2024-07-15 16:10:41.730634] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.952 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 807467 00:17:56.210 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:56.210 16:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.210 16:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:56.210 "subsystems": [ 00:17:56.210 { 00:17:56.210 "subsystem": "keyring", 00:17:56.210 "config": [] 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "subsystem": "iobuf", 00:17:56.210 "config": [ 00:17:56.210 { 00:17:56.210 "method": "iobuf_set_options", 00:17:56.210 "params": { 00:17:56.210 "small_pool_count": 8192, 00:17:56.210 "large_pool_count": 1024, 00:17:56.210 "small_bufsize": 8192, 00:17:56.210 "large_bufsize": 135168 00:17:56.210 } 00:17:56.210 } 00:17:56.210 ] 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "subsystem": "sock", 00:17:56.210 "config": [ 00:17:56.210 { 00:17:56.210 "method": "sock_set_default_impl", 00:17:56.210 "params": { 00:17:56.210 "impl_name": "posix" 00:17:56.210 } 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "method": "sock_impl_set_options", 00:17:56.210 "params": { 00:17:56.210 "impl_name": "ssl", 00:17:56.210 "recv_buf_size": 4096, 00:17:56.210 "send_buf_size": 4096, 00:17:56.210 "enable_recv_pipe": true, 00:17:56.210 "enable_quickack": false, 00:17:56.210 "enable_placement_id": 0, 00:17:56.210 "enable_zerocopy_send_server": true, 00:17:56.210 "enable_zerocopy_send_client": false, 00:17:56.210 "zerocopy_threshold": 0, 00:17:56.210 "tls_version": 0, 00:17:56.210 "enable_ktls": false 00:17:56.210 } 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "method": "sock_impl_set_options", 00:17:56.210 "params": { 00:17:56.210 "impl_name": "posix", 00:17:56.210 "recv_buf_size": 2097152, 00:17:56.210 "send_buf_size": 2097152, 00:17:56.210 "enable_recv_pipe": true, 00:17:56.210 
"enable_quickack": false, 00:17:56.210 "enable_placement_id": 0, 00:17:56.210 "enable_zerocopy_send_server": true, 00:17:56.210 "enable_zerocopy_send_client": false, 00:17:56.210 "zerocopy_threshold": 0, 00:17:56.210 "tls_version": 0, 00:17:56.210 "enable_ktls": false 00:17:56.210 } 00:17:56.210 } 00:17:56.210 ] 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "subsystem": "vmd", 00:17:56.210 "config": [] 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "subsystem": "accel", 00:17:56.210 "config": [ 00:17:56.210 { 00:17:56.210 "method": "accel_set_options", 00:17:56.210 "params": { 00:17:56.210 "small_cache_size": 128, 00:17:56.210 "large_cache_size": 16, 00:17:56.210 "task_count": 2048, 00:17:56.210 "sequence_count": 2048, 00:17:56.210 "buf_count": 2048 00:17:56.210 } 00:17:56.210 } 00:17:56.210 ] 00:17:56.210 }, 00:17:56.210 { 00:17:56.210 "subsystem": "bdev", 00:17:56.210 "config": [ 00:17:56.210 { 00:17:56.210 "method": "bdev_set_options", 00:17:56.210 "params": { 00:17:56.210 "bdev_io_pool_size": 65535, 00:17:56.211 "bdev_io_cache_size": 256, 00:17:56.211 "bdev_auto_examine": true, 00:17:56.211 "iobuf_small_cache_size": 128, 00:17:56.211 "iobuf_large_cache_size": 16 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_raid_set_options", 00:17:56.211 "params": { 00:17:56.211 "process_window_size_kb": 1024 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_iscsi_set_options", 00:17:56.211 "params": { 00:17:56.211 "timeout_sec": 30 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_nvme_set_options", 00:17:56.211 "params": { 00:17:56.211 "action_on_timeout": "none", 00:17:56.211 "timeout_us": 0, 00:17:56.211 "timeout_admin_us": 0, 00:17:56.211 "keep_alive_timeout_ms": 10000, 00:17:56.211 "arbitration_burst": 0, 00:17:56.211 "low_priority_weight": 0, 00:17:56.211 "medium_priority_weight": 0, 00:17:56.211 "high_priority_weight": 0, 00:17:56.211 "nvme_adminq_poll_period_us": 10000, 00:17:56.211 "nvme_ioq_poll_period_us": 0, 00:17:56.211 "io_queue_requests": 0, 00:17:56.211 "delay_cmd_submit": true, 00:17:56.211 "transport_retry_count": 4, 00:17:56.211 "bdev_retry_count": 3, 00:17:56.211 "transport_ack_timeout": 0, 00:17:56.211 "ctrlr_loss_timeout_sec": 0, 00:17:56.211 "reconnect_delay_sec": 0, 00:17:56.211 "fast_io_fail_timeout_sec": 0, 00:17:56.211 "disable_auto_failback": false, 00:17:56.211 "generate_uuids": false, 00:17:56.211 "transport_tos": 0, 00:17:56.211 "nvme_error_stat": false, 00:17:56.211 "rdma_srq_size": 0, 00:17:56.211 "io_path_stat": false, 00:17:56.211 "allow_accel_sequence": false, 00:17:56.211 "rdma_max_cq_size": 0, 00:17:56.211 "rdma_cm_event_timeout_ms": 0, 00:17:56.211 "dhchap_digests": [ 00:17:56.211 "sha256", 00:17:56.211 "sha384", 00:17:56.211 "sha512" 00:17:56.211 ], 00:17:56.211 "dhchap_dhgroups": [ 00:17:56.211 "null", 00:17:56.211 "ffdhe2048", 00:17:56.211 "ffdhe3072", 00:17:56.211 "ffdhe4096", 00:17:56.211 "ffdhe6144", 00:17:56.211 "ffdhe8192" 00:17:56.211 ] 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_nvme_set_hotplug", 00:17:56.211 "params": { 00:17:56.211 "period_us": 100000, 00:17:56.211 "enable": false 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_malloc_create", 00:17:56.211 "params": { 00:17:56.211 "name": "malloc0", 00:17:56.211 "num_blocks": 8192, 00:17:56.211 "block_size": 4096, 00:17:56.211 "physical_block_size": 4096, 00:17:56.211 "uuid": "f585ecda-b712-49e9-b162-f50e9ac94b67", 00:17:56.211 "optimal_io_boundary": 0 00:17:56.211 } 
00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "bdev_wait_for_examine" 00:17:56.211 } 00:17:56.211 ] 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "subsystem": "nbd", 00:17:56.211 "config": [] 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "subsystem": "scheduler", 00:17:56.211 "config": [ 00:17:56.211 { 00:17:56.211 "method": "framework_set_scheduler", 00:17:56.211 "params": { 00:17:56.211 "name": "static" 00:17:56.211 } 00:17:56.211 } 00:17:56.211 ] 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "subsystem": "nvmf", 00:17:56.211 "config": [ 00:17:56.211 { 00:17:56.211 "method": "nvmf_set_config", 00:17:56.211 "params": { 00:17:56.211 "discovery_filter": "match_any", 00:17:56.211 "admin_cmd_passthru": { 00:17:56.211 "identify_ctrlr": false 00:17:56.211 } 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "nvmf_set_max_subsystems", 00:17:56.211 "params": { 00:17:56.211 "max_subsystems": 1024 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "nvmf_set_crdt", 00:17:56.211 "params": { 00:17:56.211 "crdt1": 0, 00:17:56.211 "crdt2": 0, 00:17:56.211 "crdt3": 0 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "nvmf_create_transport", 00:17:56.211 "params": { 00:17:56.211 "trtype": "TCP", 00:17:56.211 "max_queue_depth": 128, 00:17:56.211 "max_io_qpairs_per_ctrlr": 127, 00:17:56.211 "in_capsule_data_size": 4096, 00:17:56.211 "max_io_size": 131072, 00:17:56.211 "io_unit_size": 131072, 00:17:56.211 "max_aq_depth": 128, 00:17:56.211 "num_shared_buffers": 511, 00:17:56.211 "buf_cache_size": 4294967295, 00:17:56.211 "dif_insert_or_strip": false, 00:17:56.211 "zcopy": false, 00:17:56.211 "c2h_success": false, 00:17:56.211 "sock_priority": 0, 00:17:56.211 "abort_timeout_sec": 1, 00:17:56.211 "ack_timeout": 0, 00:17:56.211 "data_wr_pool_size": 0 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "nvmf_create_subsystem", 00:17:56.211 "params": { 00:17:56.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.211 "allow_any_host": false, 00:17:56.211 "serial_number": "SPDK00000000000001", 00:17:56.211 "model_number": "SPDK bdev Controller", 00:17:56.211 "max_namespaces": 10, 00:17:56.211 "min_cntlid": 1, 00:17:56.211 "max_cntlid": 65519, 00:17:56.211 "ana_reporting": false 00:17:56.211 } 00:17:56.211 }, 00:17:56.211 { 00:17:56.211 "method": "nvmf_subsystem_add_host", 00:17:56.211 "params": { 00:17:56.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.212 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.212 "psk": "/tmp/tmp.cDbVLeguM5" 00:17:56.212 } 00:17:56.212 }, 00:17:56.212 { 00:17:56.212 "method": "nvmf_subsystem_add_ns", 00:17:56.212 "params": { 00:17:56.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.212 "namespace": { 00:17:56.212 "nsid": 1, 00:17:56.212 "bdev_name": "malloc0", 00:17:56.212 "nguid": "F585ECDAB71249E9B162F50E9AC94B67", 00:17:56.212 "uuid": "f585ecda-b712-49e9-b162-f50e9ac94b67", 00:17:56.212 "no_auto_visible": false 00:17:56.212 } 00:17:56.212 } 00:17:56.212 }, 00:17:56.212 { 00:17:56.212 "method": "nvmf_subsystem_add_listener", 00:17:56.212 "params": { 00:17:56.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.212 "listen_address": { 00:17:56.212 "trtype": "TCP", 00:17:56.212 "adrfam": "IPv4", 00:17:56.212 "traddr": "10.0.0.2", 00:17:56.212 "trsvcid": "4420" 00:17:56.212 }, 00:17:56.212 "secure_channel": true 00:17:56.212 } 00:17:56.212 } 00:17:56.212 ] 00:17:56.212 } 00:17:56.212 ] 00:17:56.212 }' 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.212 16:10:41 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=807903 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 807903 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 807903 ']' 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.212 16:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.212 [2024-07-15 16:10:42.030607] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:56.212 [2024-07-15 16:10:42.030700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.212 [2024-07-15 16:10:42.095135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.212 [2024-07-15 16:10:42.192513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.212 [2024-07-15 16:10:42.192574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.212 [2024-07-15 16:10:42.192597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.212 [2024-07-15 16:10:42.192607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.212 [2024-07-15 16:10:42.192616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
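The target that has just come up (pid 807903) was launched with '-c /dev/fd/62', i.e. it re-reads the JSON configuration echoed a few lines above instead of being reconfigured RPC by RPC. A minimal sketch of that replay pattern, assuming a shell variable tgtconf holding the saved JSON and paths relative to the SPDK tree:

  # sketch only: capture the running target's configuration, then feed it back
  # into a fresh nvmf_tgt through a process-substitution file descriptor
  tgtconf=$(scripts/rpc.py save_config)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  nvmfpid=$!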
00:17:56.212 [2024-07-15 16:10:42.192695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.471 [2024-07-15 16:10:42.415180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.471 [2024-07-15 16:10:42.431142] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:56.471 [2024-07-15 16:10:42.447201] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.471 [2024-07-15 16:10:42.458142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=808054 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 808054 /var/tmp/bdevperf.sock 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 808054 ']' 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.038 16:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:57.038 "subsystems": [ 00:17:57.038 { 00:17:57.038 "subsystem": "keyring", 00:17:57.038 "config": [] 00:17:57.038 }, 00:17:57.038 { 00:17:57.038 "subsystem": "iobuf", 00:17:57.038 "config": [ 00:17:57.038 { 00:17:57.038 "method": "iobuf_set_options", 00:17:57.038 "params": { 00:17:57.038 "small_pool_count": 8192, 00:17:57.038 "large_pool_count": 1024, 00:17:57.038 "small_bufsize": 8192, 00:17:57.038 "large_bufsize": 135168 00:17:57.038 } 00:17:57.038 } 00:17:57.038 ] 00:17:57.038 }, 00:17:57.038 { 00:17:57.038 "subsystem": "sock", 00:17:57.038 "config": [ 00:17:57.038 { 00:17:57.038 "method": "sock_set_default_impl", 00:17:57.038 "params": { 00:17:57.038 "impl_name": "posix" 00:17:57.038 } 00:17:57.038 }, 00:17:57.038 { 00:17:57.039 "method": "sock_impl_set_options", 00:17:57.039 "params": { 00:17:57.039 "impl_name": "ssl", 00:17:57.039 "recv_buf_size": 4096, 00:17:57.039 "send_buf_size": 4096, 00:17:57.039 "enable_recv_pipe": true, 00:17:57.039 "enable_quickack": false, 00:17:57.039 "enable_placement_id": 0, 00:17:57.039 "enable_zerocopy_send_server": true, 00:17:57.039 "enable_zerocopy_send_client": false, 00:17:57.039 "zerocopy_threshold": 0, 00:17:57.039 "tls_version": 0, 00:17:57.039 "enable_ktls": false 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "sock_impl_set_options", 00:17:57.039 "params": { 00:17:57.039 "impl_name": "posix", 00:17:57.039 "recv_buf_size": 2097152, 00:17:57.039 "send_buf_size": 2097152, 00:17:57.039 "enable_recv_pipe": true, 00:17:57.039 
"enable_quickack": false, 00:17:57.039 "enable_placement_id": 0, 00:17:57.039 "enable_zerocopy_send_server": true, 00:17:57.039 "enable_zerocopy_send_client": false, 00:17:57.039 "zerocopy_threshold": 0, 00:17:57.039 "tls_version": 0, 00:17:57.039 "enable_ktls": false 00:17:57.039 } 00:17:57.039 } 00:17:57.039 ] 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "subsystem": "vmd", 00:17:57.039 "config": [] 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "subsystem": "accel", 00:17:57.039 "config": [ 00:17:57.039 { 00:17:57.039 "method": "accel_set_options", 00:17:57.039 "params": { 00:17:57.039 "small_cache_size": 128, 00:17:57.039 "large_cache_size": 16, 00:17:57.039 "task_count": 2048, 00:17:57.039 "sequence_count": 2048, 00:17:57.039 "buf_count": 2048 00:17:57.039 } 00:17:57.039 } 00:17:57.039 ] 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "subsystem": "bdev", 00:17:57.039 "config": [ 00:17:57.039 { 00:17:57.039 "method": "bdev_set_options", 00:17:57.039 "params": { 00:17:57.039 "bdev_io_pool_size": 65535, 00:17:57.039 "bdev_io_cache_size": 256, 00:17:57.039 "bdev_auto_examine": true, 00:17:57.039 "iobuf_small_cache_size": 128, 00:17:57.039 "iobuf_large_cache_size": 16 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_raid_set_options", 00:17:57.039 "params": { 00:17:57.039 "process_window_size_kb": 1024 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_iscsi_set_options", 00:17:57.039 "params": { 00:17:57.039 "timeout_sec": 30 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_nvme_set_options", 00:17:57.039 "params": { 00:17:57.039 "action_on_timeout": "none", 00:17:57.039 "timeout_us": 0, 00:17:57.039 "timeout_admin_us": 0, 00:17:57.039 "keep_alive_timeout_ms": 10000, 00:17:57.039 "arbitration_burst": 0, 00:17:57.039 "low_priority_weight": 0, 00:17:57.039 "medium_priority_weight": 0, 00:17:57.039 "high_priority_weight": 0, 00:17:57.039 "nvme_adminq_poll_period_us": 10000, 00:17:57.039 "nvme_ioq_poll_period_us": 0, 00:17:57.039 "io_queue_requests": 512, 00:17:57.039 "delay_cmd_submit": true, 00:17:57.039 "transport_retry_count": 4, 00:17:57.039 "bdev_retry_count": 3, 00:17:57.039 "transport_ack_timeout": 0, 00:17:57.039 "ctrlr_loss_timeout_sec": 0, 00:17:57.039 "reconnect_delay_sec": 0, 00:17:57.039 "fast_io_fail_timeout_sec": 0, 00:17:57.039 "disable_auto_failback": false, 00:17:57.039 "generate_uuids": false, 00:17:57.039 "transport_tos": 0, 00:17:57.039 "nvme_error_stat": false, 00:17:57.039 "rdma_srq_size": 0, 00:17:57.039 "io_path_stat": false, 00:17:57.039 "allow_accel_sequence": false, 00:17:57.039 "rdma_max_cq_size": 0, 00:17:57.039 "rdma_cm_event_timeout_ms": 0, 00:17:57.039 "dhchap_digests": [ 00:17:57.039 "sha256", 00:17:57.039 "sha384", 00:17:57.039 "sha512" 00:17:57.039 ], 00:17:57.039 "dhchap_dhgroups": [ 00:17:57.039 "null", 00:17:57.039 "ffdhe2048", 00:17:57.039 "ffdhe3072", 00:17:57.039 "ffdhe4096", 00:17:57.039 "ffdhe6144", 00:17:57.039 "ffdhe8192" 00:17:57.039 ] 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_nvme_attach_controller", 00:17:57.039 "params": { 00:17:57.039 "name": "TLSTEST", 00:17:57.039 "trtype": "TCP", 00:17:57.039 "adrfam": "IPv4", 00:17:57.039 "traddr": "10.0.0.2", 00:17:57.039 "trsvcid": "4420", 00:17:57.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.039 "prchk_reftag": false, 00:17:57.039 "prchk_guard": false, 00:17:57.039 "ctrlr_loss_timeout_sec": 0, 00:17:57.039 "reconnect_delay_sec": 0, 00:17:57.039 "fast_io_fail_timeout_sec": 0, 00:17:57.039 
"psk": "/tmp/tmp.cDbVLeguM5", 00:17:57.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.039 "hdgst": false, 00:17:57.039 "ddgst": false 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_nvme_set_hotplug", 00:17:57.039 "params": { 00:17:57.039 "period_us": 100000, 00:17:57.039 "enable": false 00:17:57.039 } 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "method": "bdev_wait_for_examine" 00:17:57.039 } 00:17:57.039 ] 00:17:57.039 }, 00:17:57.039 { 00:17:57.039 "subsystem": "nbd", 00:17:57.039 "config": [] 00:17:57.039 } 00:17:57.039 ] 00:17:57.039 }' 00:17:57.039 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.039 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.039 16:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.040 [2024-07-15 16:10:43.030753] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:57.040 [2024-07-15 16:10:43.030844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808054 ] 00:17:57.299 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.299 [2024-07-15 16:10:43.091604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.299 [2024-07-15 16:10:43.199148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.558 [2024-07-15 16:10:43.372276] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.558 [2024-07-15 16:10:43.372417] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.131 16:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.131 16:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:58.131 16:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.131 Running I/O for 10 seconds... 
00:18:10.336 00:18:10.336 Latency(us) 00:18:10.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.336 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.336 Verification LBA range: start 0x0 length 0x2000 00:18:10.336 TLSTESTn1 : 10.03 3506.46 13.70 0.00 0.00 36429.67 10340.12 29903.83 00:18:10.336 =================================================================================================================== 00:18:10.336 Total : 3506.46 13.70 0.00 0.00 36429.67 10340.12 29903.83 00:18:10.336 0 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 808054 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 808054 ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 808054 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 808054 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 808054' 00:18:10.336 killing process with pid 808054 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 808054 00:18:10.336 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.336 00:18:10.336 Latency(us) 00:18:10.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.336 =================================================================================================================== 00:18:10.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.336 [2024-07-15 16:10:54.174082] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 808054 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 807903 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 807903 ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 807903 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807903 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807903' 00:18:10.336 killing process with pid 807903 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 807903 00:18:10.336 [2024-07-15 16:10:54.462145] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 
1 times 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 807903 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=809387 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 809387 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 809387 ']' 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.336 16:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.336 [2024-07-15 16:10:54.788240] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:10.336 [2024-07-15 16:10:54.788335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.336 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.336 [2024-07-15 16:10:54.850446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.336 [2024-07-15 16:10:54.958574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.336 [2024-07-15 16:10:54.958629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.336 [2024-07-15 16:10:54.958652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.336 [2024-07-15 16:10:54.958663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.336 [2024-07-15 16:10:54.958672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
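Each 'Waiting for process to start up and listen on UNIX domain socket ...' line above comes from the waitforlisten helper, which simply polls the new application's RPC socket before any configuration RPCs are attempted. A rough, illustrative equivalent (the real helper lives in autotest_common.sh):

  # sketch only: poll the RPC socket of the freshly started target until it answers
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.2
  done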
00:18:10.336 [2024-07-15 16:10:54.958704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.cDbVLeguM5 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cDbVLeguM5 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.336 [2024-07-15 16:10:55.367825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.336 [2024-07-15 16:10:55.945321] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.336 [2024-07-15 16:10:55.945551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.336 16:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.336 malloc0 00:18:10.336 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:10.594 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5 00:18:10.853 [2024-07-15 16:10:56.781843] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=809671 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 809671 /var/tmp/bdevperf.sock 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 809671 ']' 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.853 16:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.853 [2024-07-15 16:10:56.838524] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:10.853 [2024-07-15 16:10:56.838606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809671 ] 00:18:11.112 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.112 [2024-07-15 16:10:56.899277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.112 [2024-07-15 16:10:57.008365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.370 16:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.370 16:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.370 16:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cDbVLeguM5 00:18:11.370 16:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:11.629 [2024-07-15 16:10:57.592866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.888 nvme0n1 00:18:11.888 16:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.888 Running I/O for 1 seconds... 
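The run above exercises the full PSK pairing: the target side is configured through the default /var/tmp/spdk.sock and the bdevperf initiator through /var/tmp/bdevperf.sock, with the same /tmp/tmp.cDbVLeguM5 key file on both ends. Collected in one place (same commands as in the log, paths shortened to the SPDK tree):

  # target side: TCP transport, subsystem, TLS-enabled listener (-k), malloc namespace,
  # and a host entry that carries the PSK
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cDbVLeguM5
  # initiator side: register the same PSK file as keyring key "key0", then attach over TLS
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cDbVLeguM5
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1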
00:18:12.822 00:18:12.822 Latency(us) 00:18:12.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.822 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.822 Verification LBA range: start 0x0 length 0x2000 00:18:12.822 nvme0n1 : 1.02 3122.31 12.20 0.00 0.00 40628.37 8641.04 39418.69 00:18:12.822 =================================================================================================================== 00:18:12.822 Total : 3122.31 12.20 0.00 0.00 40628.37 8641.04 39418.69 00:18:12.822 0 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 809671 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 809671 ']' 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 809671 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.822 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809671 00:18:13.081 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:13.081 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:13.081 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809671' 00:18:13.081 killing process with pid 809671 00:18:13.081 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 809671 00:18:13.081 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.081 00:18:13.081 Latency(us) 00:18:13.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.081 =================================================================================================================== 00:18:13.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.081 16:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 809671 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 809387 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 809387 ']' 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 809387 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809387 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809387' 00:18:13.340 killing process with pid 809387 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 809387 00:18:13.340 [2024-07-15 16:10:59.135121] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:13.340 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 809387 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.598 16:10:59 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=809995 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 809995 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 809995 ']' 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.598 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.599 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.599 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.599 [2024-07-15 16:10:59.459606] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:13.599 [2024-07-15 16:10:59.459696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.599 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.599 [2024-07-15 16:10:59.524361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.857 [2024-07-15 16:10:59.627213] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.857 [2024-07-15 16:10:59.627277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.857 [2024-07-15 16:10:59.627300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.857 [2024-07-15 16:10:59.627312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.857 [2024-07-15 16:10:59.627321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
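The '-e 0xFFFF' on the nvmf_tgt command line enables all tracepoint groups, and the NOTICE lines above spell out how to inspect them. Following that hint (a sketch; the output file names are arbitrary and spdk_trace is assumed to sit under build/bin):

  # sketch only: snapshot the tracepoints of target instance 0, as the NOTICEs suggest
  build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # or keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0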
00:18:13.857 [2024-07-15 16:10:59.627348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.857 [2024-07-15 16:10:59.751276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.857 malloc0 00:18:13.857 [2024-07-15 16:10:59.781499] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.857 [2024-07-15 16:10:59.781729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=810094 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 810094 /var/tmp/bdevperf.sock 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 810094 ']' 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.857 16:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.857 [2024-07-15 16:10:59.849224] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
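Unlike the earlier bdevperf instances, pid 810094 is started without a '-c' config at all: '-z' leaves it idle so that the key and the controller can be added over its RPC socket, which is exactly what the following lines do before perform_tests starts the 1-second verify run. Condensed (commands as in the log, paths relative to the SPDK tree):

  # sketch only: start bdevperf idle and configure it entirely over RPC
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cDbVLeguM5
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests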
00:18:13.857 [2024-07-15 16:10:59.849306] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810094 ] 00:18:14.115 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.115 [2024-07-15 16:10:59.908217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.115 [2024-07-15 16:11:00.017479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.115 16:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.115 16:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.115 16:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cDbVLeguM5 00:18:14.373 16:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:14.633 [2024-07-15 16:11:00.595724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.892 nvme0n1 00:18:14.892 16:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.892 Running I/O for 1 seconds... 00:18:15.831 00:18:15.831 Latency(us) 00:18:15.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.831 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:15.831 Verification LBA range: start 0x0 length 0x2000 00:18:15.831 nvme0n1 : 1.02 3620.45 14.14 0.00 0.00 35005.03 6165.24 27767.85 00:18:15.831 =================================================================================================================== 00:18:15.831 Total : 3620.45 14.14 0.00 0.00 35005.03 6165.24 27767.85 00:18:15.831 0 00:18:15.831 16:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:15.831 16:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.831 16:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.090 16:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.090 16:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:16.090 "subsystems": [ 00:18:16.090 { 00:18:16.090 "subsystem": "keyring", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "keyring_file_add_key", 00:18:16.090 "params": { 00:18:16.090 "name": "key0", 00:18:16.090 "path": "/tmp/tmp.cDbVLeguM5" 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "iobuf", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "iobuf_set_options", 00:18:16.090 "params": { 00:18:16.090 "small_pool_count": 8192, 00:18:16.090 "large_pool_count": 1024, 00:18:16.090 "small_bufsize": 8192, 00:18:16.090 "large_bufsize": 135168 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "sock", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "sock_set_default_impl", 00:18:16.090 "params": { 00:18:16.090 "impl_name": "posix" 00:18:16.090 } 
00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "sock_impl_set_options", 00:18:16.090 "params": { 00:18:16.090 "impl_name": "ssl", 00:18:16.090 "recv_buf_size": 4096, 00:18:16.090 "send_buf_size": 4096, 00:18:16.090 "enable_recv_pipe": true, 00:18:16.090 "enable_quickack": false, 00:18:16.090 "enable_placement_id": 0, 00:18:16.090 "enable_zerocopy_send_server": true, 00:18:16.090 "enable_zerocopy_send_client": false, 00:18:16.090 "zerocopy_threshold": 0, 00:18:16.090 "tls_version": 0, 00:18:16.090 "enable_ktls": false 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "sock_impl_set_options", 00:18:16.090 "params": { 00:18:16.090 "impl_name": "posix", 00:18:16.090 "recv_buf_size": 2097152, 00:18:16.090 "send_buf_size": 2097152, 00:18:16.090 "enable_recv_pipe": true, 00:18:16.090 "enable_quickack": false, 00:18:16.090 "enable_placement_id": 0, 00:18:16.090 "enable_zerocopy_send_server": true, 00:18:16.090 "enable_zerocopy_send_client": false, 00:18:16.090 "zerocopy_threshold": 0, 00:18:16.090 "tls_version": 0, 00:18:16.090 "enable_ktls": false 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "vmd", 00:18:16.090 "config": [] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "accel", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "accel_set_options", 00:18:16.090 "params": { 00:18:16.090 "small_cache_size": 128, 00:18:16.090 "large_cache_size": 16, 00:18:16.090 "task_count": 2048, 00:18:16.090 "sequence_count": 2048, 00:18:16.090 "buf_count": 2048 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "bdev", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "bdev_set_options", 00:18:16.090 "params": { 00:18:16.090 "bdev_io_pool_size": 65535, 00:18:16.090 "bdev_io_cache_size": 256, 00:18:16.090 "bdev_auto_examine": true, 00:18:16.090 "iobuf_small_cache_size": 128, 00:18:16.090 "iobuf_large_cache_size": 16 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_raid_set_options", 00:18:16.090 "params": { 00:18:16.090 "process_window_size_kb": 1024 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_iscsi_set_options", 00:18:16.090 "params": { 00:18:16.090 "timeout_sec": 30 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_nvme_set_options", 00:18:16.090 "params": { 00:18:16.090 "action_on_timeout": "none", 00:18:16.090 "timeout_us": 0, 00:18:16.090 "timeout_admin_us": 0, 00:18:16.090 "keep_alive_timeout_ms": 10000, 00:18:16.090 "arbitration_burst": 0, 00:18:16.090 "low_priority_weight": 0, 00:18:16.090 "medium_priority_weight": 0, 00:18:16.090 "high_priority_weight": 0, 00:18:16.090 "nvme_adminq_poll_period_us": 10000, 00:18:16.090 "nvme_ioq_poll_period_us": 0, 00:18:16.090 "io_queue_requests": 0, 00:18:16.090 "delay_cmd_submit": true, 00:18:16.090 "transport_retry_count": 4, 00:18:16.090 "bdev_retry_count": 3, 00:18:16.090 "transport_ack_timeout": 0, 00:18:16.090 "ctrlr_loss_timeout_sec": 0, 00:18:16.090 "reconnect_delay_sec": 0, 00:18:16.090 "fast_io_fail_timeout_sec": 0, 00:18:16.090 "disable_auto_failback": false, 00:18:16.090 "generate_uuids": false, 00:18:16.090 "transport_tos": 0, 00:18:16.090 "nvme_error_stat": false, 00:18:16.090 "rdma_srq_size": 0, 00:18:16.090 "io_path_stat": false, 00:18:16.090 "allow_accel_sequence": false, 00:18:16.090 "rdma_max_cq_size": 0, 00:18:16.090 "rdma_cm_event_timeout_ms": 0, 00:18:16.090 "dhchap_digests": [ 00:18:16.090 "sha256", 
00:18:16.090 "sha384", 00:18:16.090 "sha512" 00:18:16.090 ], 00:18:16.090 "dhchap_dhgroups": [ 00:18:16.090 "null", 00:18:16.090 "ffdhe2048", 00:18:16.090 "ffdhe3072", 00:18:16.090 "ffdhe4096", 00:18:16.090 "ffdhe6144", 00:18:16.090 "ffdhe8192" 00:18:16.090 ] 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_nvme_set_hotplug", 00:18:16.090 "params": { 00:18:16.090 "period_us": 100000, 00:18:16.090 "enable": false 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_malloc_create", 00:18:16.090 "params": { 00:18:16.090 "name": "malloc0", 00:18:16.090 "num_blocks": 8192, 00:18:16.090 "block_size": 4096, 00:18:16.090 "physical_block_size": 4096, 00:18:16.090 "uuid": "d0fcf284-df31-4f73-a618-5462be296acb", 00:18:16.090 "optimal_io_boundary": 0 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "bdev_wait_for_examine" 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "nbd", 00:18:16.090 "config": [] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "scheduler", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "framework_set_scheduler", 00:18:16.090 "params": { 00:18:16.090 "name": "static" 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ] 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "subsystem": "nvmf", 00:18:16.090 "config": [ 00:18:16.090 { 00:18:16.090 "method": "nvmf_set_config", 00:18:16.090 "params": { 00:18:16.090 "discovery_filter": "match_any", 00:18:16.090 "admin_cmd_passthru": { 00:18:16.090 "identify_ctrlr": false 00:18:16.090 } 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "nvmf_set_max_subsystems", 00:18:16.090 "params": { 00:18:16.090 "max_subsystems": 1024 00:18:16.090 } 00:18:16.090 }, 00:18:16.090 { 00:18:16.090 "method": "nvmf_set_crdt", 00:18:16.090 "params": { 00:18:16.091 "crdt1": 0, 00:18:16.091 "crdt2": 0, 00:18:16.091 "crdt3": 0 00:18:16.091 } 00:18:16.091 }, 00:18:16.091 { 00:18:16.091 "method": "nvmf_create_transport", 00:18:16.091 "params": { 00:18:16.091 "trtype": "TCP", 00:18:16.091 "max_queue_depth": 128, 00:18:16.091 "max_io_qpairs_per_ctrlr": 127, 00:18:16.091 "in_capsule_data_size": 4096, 00:18:16.091 "max_io_size": 131072, 00:18:16.091 "io_unit_size": 131072, 00:18:16.091 "max_aq_depth": 128, 00:18:16.091 "num_shared_buffers": 511, 00:18:16.091 "buf_cache_size": 4294967295, 00:18:16.091 "dif_insert_or_strip": false, 00:18:16.091 "zcopy": false, 00:18:16.091 "c2h_success": false, 00:18:16.091 "sock_priority": 0, 00:18:16.091 "abort_timeout_sec": 1, 00:18:16.091 "ack_timeout": 0, 00:18:16.091 "data_wr_pool_size": 0 00:18:16.091 } 00:18:16.091 }, 00:18:16.091 { 00:18:16.091 "method": "nvmf_create_subsystem", 00:18:16.091 "params": { 00:18:16.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.091 "allow_any_host": false, 00:18:16.091 "serial_number": "00000000000000000000", 00:18:16.091 "model_number": "SPDK bdev Controller", 00:18:16.091 "max_namespaces": 32, 00:18:16.091 "min_cntlid": 1, 00:18:16.091 "max_cntlid": 65519, 00:18:16.091 "ana_reporting": false 00:18:16.091 } 00:18:16.091 }, 00:18:16.091 { 00:18:16.091 "method": "nvmf_subsystem_add_host", 00:18:16.091 "params": { 00:18:16.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.091 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.091 "psk": "key0" 00:18:16.091 } 00:18:16.091 }, 00:18:16.091 { 00:18:16.091 "method": "nvmf_subsystem_add_ns", 00:18:16.091 "params": { 00:18:16.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.091 "namespace": { 00:18:16.091 "nsid": 1, 
00:18:16.091 "bdev_name": "malloc0", 00:18:16.091 "nguid": "D0FCF284DF314F73A6185462BE296ACB", 00:18:16.091 "uuid": "d0fcf284-df31-4f73-a618-5462be296acb", 00:18:16.091 "no_auto_visible": false 00:18:16.091 } 00:18:16.091 } 00:18:16.091 }, 00:18:16.091 { 00:18:16.091 "method": "nvmf_subsystem_add_listener", 00:18:16.091 "params": { 00:18:16.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.091 "listen_address": { 00:18:16.091 "trtype": "TCP", 00:18:16.091 "adrfam": "IPv4", 00:18:16.091 "traddr": "10.0.0.2", 00:18:16.091 "trsvcid": "4420" 00:18:16.091 }, 00:18:16.091 "secure_channel": true 00:18:16.091 } 00:18:16.091 } 00:18:16.091 ] 00:18:16.091 } 00:18:16.091 ] 00:18:16.091 }' 00:18:16.091 16:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:16.350 16:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:16.350 "subsystems": [ 00:18:16.350 { 00:18:16.350 "subsystem": "keyring", 00:18:16.350 "config": [ 00:18:16.350 { 00:18:16.350 "method": "keyring_file_add_key", 00:18:16.350 "params": { 00:18:16.350 "name": "key0", 00:18:16.350 "path": "/tmp/tmp.cDbVLeguM5" 00:18:16.350 } 00:18:16.350 } 00:18:16.350 ] 00:18:16.350 }, 00:18:16.350 { 00:18:16.350 "subsystem": "iobuf", 00:18:16.350 "config": [ 00:18:16.350 { 00:18:16.350 "method": "iobuf_set_options", 00:18:16.350 "params": { 00:18:16.350 "small_pool_count": 8192, 00:18:16.350 "large_pool_count": 1024, 00:18:16.350 "small_bufsize": 8192, 00:18:16.350 "large_bufsize": 135168 00:18:16.350 } 00:18:16.350 } 00:18:16.350 ] 00:18:16.350 }, 00:18:16.350 { 00:18:16.350 "subsystem": "sock", 00:18:16.350 "config": [ 00:18:16.350 { 00:18:16.350 "method": "sock_set_default_impl", 00:18:16.350 "params": { 00:18:16.350 "impl_name": "posix" 00:18:16.350 } 00:18:16.350 }, 00:18:16.350 { 00:18:16.350 "method": "sock_impl_set_options", 00:18:16.350 "params": { 00:18:16.350 "impl_name": "ssl", 00:18:16.350 "recv_buf_size": 4096, 00:18:16.350 "send_buf_size": 4096, 00:18:16.350 "enable_recv_pipe": true, 00:18:16.350 "enable_quickack": false, 00:18:16.350 "enable_placement_id": 0, 00:18:16.350 "enable_zerocopy_send_server": true, 00:18:16.350 "enable_zerocopy_send_client": false, 00:18:16.350 "zerocopy_threshold": 0, 00:18:16.350 "tls_version": 0, 00:18:16.350 "enable_ktls": false 00:18:16.350 } 00:18:16.350 }, 00:18:16.350 { 00:18:16.350 "method": "sock_impl_set_options", 00:18:16.350 "params": { 00:18:16.350 "impl_name": "posix", 00:18:16.350 "recv_buf_size": 2097152, 00:18:16.350 "send_buf_size": 2097152, 00:18:16.350 "enable_recv_pipe": true, 00:18:16.350 "enable_quickack": false, 00:18:16.350 "enable_placement_id": 0, 00:18:16.350 "enable_zerocopy_send_server": true, 00:18:16.350 "enable_zerocopy_send_client": false, 00:18:16.350 "zerocopy_threshold": 0, 00:18:16.350 "tls_version": 0, 00:18:16.350 "enable_ktls": false 00:18:16.350 } 00:18:16.350 } 00:18:16.350 ] 00:18:16.350 }, 00:18:16.350 { 00:18:16.350 "subsystem": "vmd", 00:18:16.350 "config": [] 00:18:16.350 }, 00:18:16.350 { 00:18:16.351 "subsystem": "accel", 00:18:16.351 "config": [ 00:18:16.351 { 00:18:16.351 "method": "accel_set_options", 00:18:16.351 "params": { 00:18:16.351 "small_cache_size": 128, 00:18:16.351 "large_cache_size": 16, 00:18:16.351 "task_count": 2048, 00:18:16.351 "sequence_count": 2048, 00:18:16.351 "buf_count": 2048 00:18:16.351 } 00:18:16.351 } 00:18:16.351 ] 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "subsystem": "bdev", 00:18:16.351 "config": [ 
00:18:16.351 { 00:18:16.351 "method": "bdev_set_options", 00:18:16.351 "params": { 00:18:16.351 "bdev_io_pool_size": 65535, 00:18:16.351 "bdev_io_cache_size": 256, 00:18:16.351 "bdev_auto_examine": true, 00:18:16.351 "iobuf_small_cache_size": 128, 00:18:16.351 "iobuf_large_cache_size": 16 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_raid_set_options", 00:18:16.351 "params": { 00:18:16.351 "process_window_size_kb": 1024 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_iscsi_set_options", 00:18:16.351 "params": { 00:18:16.351 "timeout_sec": 30 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_nvme_set_options", 00:18:16.351 "params": { 00:18:16.351 "action_on_timeout": "none", 00:18:16.351 "timeout_us": 0, 00:18:16.351 "timeout_admin_us": 0, 00:18:16.351 "keep_alive_timeout_ms": 10000, 00:18:16.351 "arbitration_burst": 0, 00:18:16.351 "low_priority_weight": 0, 00:18:16.351 "medium_priority_weight": 0, 00:18:16.351 "high_priority_weight": 0, 00:18:16.351 "nvme_adminq_poll_period_us": 10000, 00:18:16.351 "nvme_ioq_poll_period_us": 0, 00:18:16.351 "io_queue_requests": 512, 00:18:16.351 "delay_cmd_submit": true, 00:18:16.351 "transport_retry_count": 4, 00:18:16.351 "bdev_retry_count": 3, 00:18:16.351 "transport_ack_timeout": 0, 00:18:16.351 "ctrlr_loss_timeout_sec": 0, 00:18:16.351 "reconnect_delay_sec": 0, 00:18:16.351 "fast_io_fail_timeout_sec": 0, 00:18:16.351 "disable_auto_failback": false, 00:18:16.351 "generate_uuids": false, 00:18:16.351 "transport_tos": 0, 00:18:16.351 "nvme_error_stat": false, 00:18:16.351 "rdma_srq_size": 0, 00:18:16.351 "io_path_stat": false, 00:18:16.351 "allow_accel_sequence": false, 00:18:16.351 "rdma_max_cq_size": 0, 00:18:16.351 "rdma_cm_event_timeout_ms": 0, 00:18:16.351 "dhchap_digests": [ 00:18:16.351 "sha256", 00:18:16.351 "sha384", 00:18:16.351 "sha512" 00:18:16.351 ], 00:18:16.351 "dhchap_dhgroups": [ 00:18:16.351 "null", 00:18:16.351 "ffdhe2048", 00:18:16.351 "ffdhe3072", 00:18:16.351 "ffdhe4096", 00:18:16.351 "ffdhe6144", 00:18:16.351 "ffdhe8192" 00:18:16.351 ] 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_nvme_attach_controller", 00:18:16.351 "params": { 00:18:16.351 "name": "nvme0", 00:18:16.351 "trtype": "TCP", 00:18:16.351 "adrfam": "IPv4", 00:18:16.351 "traddr": "10.0.0.2", 00:18:16.351 "trsvcid": "4420", 00:18:16.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.351 "prchk_reftag": false, 00:18:16.351 "prchk_guard": false, 00:18:16.351 "ctrlr_loss_timeout_sec": 0, 00:18:16.351 "reconnect_delay_sec": 0, 00:18:16.351 "fast_io_fail_timeout_sec": 0, 00:18:16.351 "psk": "key0", 00:18:16.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.351 "hdgst": false, 00:18:16.351 "ddgst": false 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_nvme_set_hotplug", 00:18:16.351 "params": { 00:18:16.351 "period_us": 100000, 00:18:16.351 "enable": false 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_enable_histogram", 00:18:16.351 "params": { 00:18:16.351 "name": "nvme0n1", 00:18:16.351 "enable": true 00:18:16.351 } 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "method": "bdev_wait_for_examine" 00:18:16.351 } 00:18:16.351 ] 00:18:16.351 }, 00:18:16.351 { 00:18:16.351 "subsystem": "nbd", 00:18:16.351 "config": [] 00:18:16.351 } 00:18:16.351 ] 00:18:16.351 }' 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 810094 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 810094 ']' 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 810094 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 810094 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 810094' 00:18:16.351 killing process with pid 810094 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 810094 00:18:16.351 Received shutdown signal, test time was about 1.000000 seconds 00:18:16.351 00:18:16.351 Latency(us) 00:18:16.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.351 =================================================================================================================== 00:18:16.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.351 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 810094 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 809995 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 809995 ']' 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 809995 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809995 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809995' 00:18:16.611 killing process with pid 809995 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 809995 00:18:16.611 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 809995 00:18:17.178 16:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:17.178 16:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.178 16:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:17.178 "subsystems": [ 00:18:17.178 { 00:18:17.178 "subsystem": "keyring", 00:18:17.178 "config": [ 00:18:17.178 { 00:18:17.178 "method": "keyring_file_add_key", 00:18:17.178 "params": { 00:18:17.178 "name": "key0", 00:18:17.178 "path": "/tmp/tmp.cDbVLeguM5" 00:18:17.178 } 00:18:17.178 } 00:18:17.178 ] 00:18:17.178 }, 00:18:17.178 { 00:18:17.178 "subsystem": "iobuf", 00:18:17.178 "config": [ 00:18:17.178 { 00:18:17.178 "method": "iobuf_set_options", 00:18:17.178 "params": { 00:18:17.178 "small_pool_count": 8192, 00:18:17.178 "large_pool_count": 1024, 00:18:17.178 "small_bufsize": 8192, 00:18:17.178 "large_bufsize": 135168 00:18:17.178 } 00:18:17.178 } 00:18:17.178 ] 00:18:17.178 }, 00:18:17.178 { 00:18:17.178 "subsystem": "sock", 00:18:17.178 "config": [ 00:18:17.178 { 00:18:17.178 "method": 
"sock_set_default_impl", 00:18:17.178 "params": { 00:18:17.178 "impl_name": "posix" 00:18:17.178 } 00:18:17.178 }, 00:18:17.178 { 00:18:17.178 "method": "sock_impl_set_options", 00:18:17.178 "params": { 00:18:17.178 "impl_name": "ssl", 00:18:17.178 "recv_buf_size": 4096, 00:18:17.178 "send_buf_size": 4096, 00:18:17.178 "enable_recv_pipe": true, 00:18:17.178 "enable_quickack": false, 00:18:17.178 "enable_placement_id": 0, 00:18:17.178 "enable_zerocopy_send_server": true, 00:18:17.178 "enable_zerocopy_send_client": false, 00:18:17.178 "zerocopy_threshold": 0, 00:18:17.178 "tls_version": 0, 00:18:17.178 "enable_ktls": false 00:18:17.178 } 00:18:17.178 }, 00:18:17.178 { 00:18:17.178 "method": "sock_impl_set_options", 00:18:17.178 "params": { 00:18:17.178 "impl_name": "posix", 00:18:17.178 "recv_buf_size": 2097152, 00:18:17.178 "send_buf_size": 2097152, 00:18:17.178 "enable_recv_pipe": true, 00:18:17.178 "enable_quickack": false, 00:18:17.178 "enable_placement_id": 0, 00:18:17.178 "enable_zerocopy_send_server": true, 00:18:17.178 "enable_zerocopy_send_client": false, 00:18:17.178 "zerocopy_threshold": 0, 00:18:17.178 "tls_version": 0, 00:18:17.178 "enable_ktls": false 00:18:17.178 } 00:18:17.178 } 00:18:17.178 ] 00:18:17.178 }, 00:18:17.179 { 00:18:17.179 "subsystem": "vmd", 00:18:17.179 "config": [] 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "subsystem": "accel", 00:18:17.179 "config": [ 00:18:17.179 { 00:18:17.179 "method": "accel_set_options", 00:18:17.179 "params": { 00:18:17.179 "small_cache_size": 128, 00:18:17.179 "large_cache_size": 16, 00:18:17.179 "task_count": 2048, 00:18:17.179 "sequence_count": 2048, 00:18:17.179 "buf_count": 2048 00:18:17.179 } 00:18:17.179 } 00:18:17.179 ] 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "subsystem": "bdev", 00:18:17.179 "config": [ 00:18:17.179 { 00:18:17.179 "method": "bdev_set_options", 00:18:17.179 "params": { 00:18:17.179 "bdev_io_pool_size": 65535, 00:18:17.179 "bdev_io_cache_size": 256, 00:18:17.179 "bdev_auto_examine": true, 00:18:17.179 "iobuf_small_cache_size": 128, 00:18:17.179 "iobuf_large_cache_size": 16 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_raid_set_options", 00:18:17.179 "params": { 00:18:17.179 "process_window_size_kb": 1024 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_iscsi_set_options", 00:18:17.179 "params": { 00:18:17.179 "timeout_sec": 30 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_nvme_set_options", 00:18:17.179 "params": { 00:18:17.179 "action_on_timeout": "none", 00:18:17.179 "timeout_us": 0, 00:18:17.179 "timeout_admin_us": 0, 00:18:17.179 "keep_alive_timeout_ms": 10000, 00:18:17.179 "arbitration_burst": 0, 00:18:17.179 "low_priority_weight": 0, 00:18:17.179 "medium_priority_weight": 0, 00:18:17.179 "high_priority_weight": 0, 00:18:17.179 "nvme_adminq_poll_period_us": 10000, 00:18:17.179 "nvme_ioq_poll_period_us": 0, 00:18:17.179 "io_queue_requests": 0, 00:18:17.179 "delay_cmd_submit": true, 00:18:17.179 "transport_retry_count": 4, 00:18:17.179 "bdev_retry_count": 3, 00:18:17.179 "transport_ack_timeout": 0, 00:18:17.179 "ctrlr_loss_timeout_sec": 0, 00:18:17.179 "reconnect_delay_sec": 0, 00:18:17.179 "fast_io_fail_timeout_sec": 0, 00:18:17.179 "disable_auto_failback": false, 00:18:17.179 "generate_uuids": false, 00:18:17.179 "transport_tos": 0, 00:18:17.179 "nvme_error_stat": false, 00:18:17.179 "rdma_srq_size": 0, 00:18:17.179 "io_path_stat": false, 00:18:17.179 "allow_accel_sequence": false, 00:18:17.179 "rdma_max_cq_size": 0, 
00:18:17.179 "rdma_cm_event_timeout_ms": 0, 00:18:17.179 "dhchap_digests": [ 00:18:17.179 "sha256", 00:18:17.179 "sha384", 00:18:17.179 "sha512" 00:18:17.179 ], 00:18:17.179 "dhchap_dhgroups": [ 00:18:17.179 "null", 00:18:17.179 "ffdhe2048", 00:18:17.179 "ffdhe3072", 00:18:17.179 "ffdhe4096", 00:18:17.179 "ffdhe6144", 00:18:17.179 "ffdhe8192" 00:18:17.179 ] 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_nvme_set_hotplug", 00:18:17.179 "params": { 00:18:17.179 "period_us": 100000, 00:18:17.179 "enable": false 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_malloc_create", 00:18:17.179 "params": { 00:18:17.179 "name": "malloc0", 00:18:17.179 "num_blocks": 8192, 00:18:17.179 "block_size": 4096, 00:18:17.179 "physical_block_size": 4096, 00:18:17.179 "uuid": "d0fcf284-df31-4f73-a618-5462be296acb", 00:18:17.179 "optimal_io_boundary": 0 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "bdev_wait_for_examine" 00:18:17.179 } 00:18:17.179 ] 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "subsystem": "nbd", 00:18:17.179 "config": [] 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "subsystem": "scheduler", 00:18:17.179 "config": [ 00:18:17.179 { 00:18:17.179 "method": "framework_set_scheduler", 00:18:17.179 "params": { 00:18:17.179 "name": "static" 00:18:17.179 } 00:18:17.179 } 00:18:17.179 ] 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "subsystem": "nvmf", 00:18:17.179 "config": [ 00:18:17.179 { 00:18:17.179 "method": "nvmf_set_config", 00:18:17.179 "params": { 00:18:17.179 "discovery_filter": "match_any", 00:18:17.179 "admin_cmd_passthru": { 00:18:17.179 "identify_ctrlr": false 00:18:17.179 } 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_set_max_subsystems", 00:18:17.179 "params": { 00:18:17.179 "max_subsystems": 1024 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_set_crdt", 00:18:17.179 "params": { 00:18:17.179 "crdt1": 0, 00:18:17.179 "crdt2": 0, 00:18:17.179 "crdt3": 0 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_create_transport", 00:18:17.179 "params": { 00:18:17.179 "trtype": "TCP", 00:18:17.179 "max_queue_depth": 128, 00:18:17.179 "max_io_qpairs_per_ctrlr": 127, 00:18:17.179 "in_capsule_data_size": 4096, 00:18:17.179 "max_io_size": 131072, 00:18:17.179 "io_unit_size": 131072, 00:18:17.179 "max_aq_depth": 128, 00:18:17.179 "num_shared_buffers": 511, 00:18:17.179 "buf_cache_size": 4294967295, 00:18:17.179 "dif_insert_or_strip": false, 00:18:17.179 "zcopy": false, 00:18:17.179 "c2h_success": false, 00:18:17.179 "sock_priority": 0, 00:18:17.179 "abort_timeout_sec": 1, 00:18:17.179 "ack_timeout": 0, 00:18:17.179 "data_wr_pool_size": 0 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_create_subsystem", 00:18:17.179 "params": { 00:18:17.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.179 "allow_any_host": false, 00:18:17.179 "serial_number": "00000000000000000000", 00:18:17.179 "model_number": "SPDK bdev Controller", 00:18:17.179 "max_namespaces": 32, 00:18:17.179 "min_cntlid": 1, 00:18:17.179 "max_cntlid": 65519, 00:18:17.179 "ana_reporting": false 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_subsystem_add_host", 00:18:17.179 "params": { 00:18:17.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.179 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.179 "psk": "key0" 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_subsystem_add_ns", 00:18:17.179 "params": { 
00:18:17.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.179 "namespace": { 00:18:17.179 "nsid": 1, 00:18:17.179 "bdev_name": "malloc0", 00:18:17.179 "nguid": "D0FCF284DF314F73A6185462BE296ACB", 00:18:17.179 "uuid": "d0fcf284-df31-4f73-a618-5462be296acb", 00:18:17.179 "no_auto_visible": false 00:18:17.179 } 00:18:17.179 } 00:18:17.179 }, 00:18:17.179 { 00:18:17.179 "method": "nvmf_subsystem_add_listener", 00:18:17.179 "params": { 00:18:17.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.179 "listen_address": { 00:18:17.179 "trtype": "TCP", 00:18:17.179 "adrfam": "IPv4", 00:18:17.179 "traddr": "10.0.0.2", 00:18:17.179 "trsvcid": "4420" 00:18:17.179 }, 00:18:17.179 "secure_channel": true 00:18:17.179 } 00:18:17.179 } 00:18:17.179 ] 00:18:17.179 } 00:18:17.179 ] 00:18:17.179 }' 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=810487 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 810487 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 810487 ']' 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.179 16:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.179 [2024-07-15 16:11:02.932216] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:17.179 [2024-07-15 16:11:02.932307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.179 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.179 [2024-07-15 16:11:02.995480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.179 [2024-07-15 16:11:03.097081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.179 [2024-07-15 16:11:03.097145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.179 [2024-07-15 16:11:03.097166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.179 [2024-07-15 16:11:03.097176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.179 [2024-07-15 16:11:03.097186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.179 [2024-07-15 16:11:03.097268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.440 [2024-07-15 16:11:03.319874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.440 [2024-07-15 16:11:03.351905] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.440 [2024-07-15 16:11:03.363141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=810539 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 810539 /var/tmp/bdevperf.sock 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 810539 ']' 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
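The bdevperf client for the TLS test is brought up here in idle mode (-z) against a private RPC socket, and its JSON configuration (built into the bperfcfg variable earlier in the trace) is handed over on an inherited file descriptor rather than written to disk. A minimal stand-alone sketch of that pattern, assuming the SPDK tree layout used by this job and that bperfcfg already holds the config (the waiting step is one simplified option, not the exact helper the test uses):

  # launch bdevperf idle (-z) on its own RPC socket; process substitution
  # exposes the generated JSON config as /dev/fd/63, as seen in the log
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  # one way to wait until the UNIX socket answers RPCs
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock -t 5 rpc_get_methods > /dev/null
  # then drive the actual workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests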
00:18:18.004 16:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:18.004 "subsystems": [ 00:18:18.004 { 00:18:18.004 "subsystem": "keyring", 00:18:18.004 "config": [ 00:18:18.004 { 00:18:18.004 "method": "keyring_file_add_key", 00:18:18.004 "params": { 00:18:18.004 "name": "key0", 00:18:18.005 "path": "/tmp/tmp.cDbVLeguM5" 00:18:18.005 } 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "iobuf", 00:18:18.005 "config": [ 00:18:18.005 { 00:18:18.005 "method": "iobuf_set_options", 00:18:18.005 "params": { 00:18:18.005 "small_pool_count": 8192, 00:18:18.005 "large_pool_count": 1024, 00:18:18.005 "small_bufsize": 8192, 00:18:18.005 "large_bufsize": 135168 00:18:18.005 } 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "sock", 00:18:18.005 "config": [ 00:18:18.005 { 00:18:18.005 "method": "sock_set_default_impl", 00:18:18.005 "params": { 00:18:18.005 "impl_name": "posix" 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "sock_impl_set_options", 00:18:18.005 "params": { 00:18:18.005 "impl_name": "ssl", 00:18:18.005 "recv_buf_size": 4096, 00:18:18.005 "send_buf_size": 4096, 00:18:18.005 "enable_recv_pipe": true, 00:18:18.005 "enable_quickack": false, 00:18:18.005 "enable_placement_id": 0, 00:18:18.005 "enable_zerocopy_send_server": true, 00:18:18.005 "enable_zerocopy_send_client": false, 00:18:18.005 "zerocopy_threshold": 0, 00:18:18.005 "tls_version": 0, 00:18:18.005 "enable_ktls": false 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "sock_impl_set_options", 00:18:18.005 "params": { 00:18:18.005 "impl_name": "posix", 00:18:18.005 "recv_buf_size": 2097152, 00:18:18.005 "send_buf_size": 2097152, 00:18:18.005 "enable_recv_pipe": true, 00:18:18.005 "enable_quickack": false, 00:18:18.005 "enable_placement_id": 0, 00:18:18.005 "enable_zerocopy_send_server": true, 00:18:18.005 "enable_zerocopy_send_client": false, 00:18:18.005 "zerocopy_threshold": 0, 00:18:18.005 "tls_version": 0, 00:18:18.005 "enable_ktls": false 00:18:18.005 } 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "vmd", 00:18:18.005 "config": [] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "accel", 00:18:18.005 "config": [ 00:18:18.005 { 00:18:18.005 "method": "accel_set_options", 00:18:18.005 "params": { 00:18:18.005 "small_cache_size": 128, 00:18:18.005 "large_cache_size": 16, 00:18:18.005 "task_count": 2048, 00:18:18.005 "sequence_count": 2048, 00:18:18.005 "buf_count": 2048 00:18:18.005 } 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "bdev", 00:18:18.005 "config": [ 00:18:18.005 { 00:18:18.005 "method": "bdev_set_options", 00:18:18.005 "params": { 00:18:18.005 "bdev_io_pool_size": 65535, 00:18:18.005 "bdev_io_cache_size": 256, 00:18:18.005 "bdev_auto_examine": true, 00:18:18.005 "iobuf_small_cache_size": 128, 00:18:18.005 "iobuf_large_cache_size": 16 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_raid_set_options", 00:18:18.005 "params": { 00:18:18.005 "process_window_size_kb": 1024 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_iscsi_set_options", 00:18:18.005 "params": { 00:18:18.005 "timeout_sec": 30 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_nvme_set_options", 00:18:18.005 "params": { 00:18:18.005 "action_on_timeout": "none", 00:18:18.005 "timeout_us": 0, 00:18:18.005 "timeout_admin_us": 0, 00:18:18.005 "keep_alive_timeout_ms": 
10000, 00:18:18.005 "arbitration_burst": 0, 00:18:18.005 "low_priority_weight": 0, 00:18:18.005 "medium_priority_weight": 0, 00:18:18.005 "high_priority_weight": 0, 00:18:18.005 "nvme_adminq_poll_period_us": 10000, 00:18:18.005 "nvme_ioq_poll_period_us": 0, 00:18:18.005 "io_queue_requests": 512, 00:18:18.005 "delay_cmd_submit": true, 00:18:18.005 "transport_retry_count": 4, 00:18:18.005 "bdev_retry_count": 3, 00:18:18.005 "transport_ack_timeout": 0, 00:18:18.005 "ctrlr_loss_timeout_sec": 0, 00:18:18.005 "reconnect_delay_sec": 0, 00:18:18.005 "fast_io_fail_timeout_sec": 0, 00:18:18.005 "disable_auto_failback": false, 00:18:18.005 "generate_uuids": false, 00:18:18.005 "transport_tos": 0, 00:18:18.005 "nvme_error_stat": false, 00:18:18.005 "rdma_srq_size": 0, 00:18:18.005 "io_path_stat": false, 00:18:18.005 "allow_accel_sequence": false, 00:18:18.005 "rdma_max_cq_size": 0, 00:18:18.005 "rdma_cm_event_timeout_ms": 0, 00:18:18.005 "dhchap_digests": [ 00:18:18.005 "sha256", 00:18:18.005 "sha384", 00:18:18.005 "sha512" 00:18:18.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.005 ], 00:18:18.005 "dhchap_dhgroups": [ 00:18:18.005 "null", 00:18:18.005 "ffdhe2048", 00:18:18.005 "ffdhe3072", 00:18:18.005 "ffdhe4096", 00:18:18.005 "ffdhe6144", 00:18:18.005 "ffdhe8192" 00:18:18.005 ] 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_nvme_attach_controller", 00:18:18.005 "params": { 00:18:18.005 "name": "nvme0", 00:18:18.005 "trtype": "TCP", 00:18:18.005 "adrfam": "IPv4", 00:18:18.005 "traddr": "10.0.0.2", 00:18:18.005 "trsvcid": "4420", 00:18:18.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.005 "prchk_reftag": false, 00:18:18.005 "prchk_guard": false, 00:18:18.005 "ctrlr_loss_timeout_sec": 0, 00:18:18.005 "reconnect_delay_sec": 0, 00:18:18.005 "fast_io_fail_timeout_sec": 0, 00:18:18.005 "psk": "key0", 00:18:18.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.005 "hdgst": false, 00:18:18.005 "ddgst": false 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_nvme_set_hotplug", 00:18:18.005 "params": { 00:18:18.005 "period_us": 100000, 00:18:18.005 "enable": false 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_enable_histogram", 00:18:18.005 "params": { 00:18:18.005 "name": "nvme0n1", 00:18:18.005 "enable": true 00:18:18.005 } 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "method": "bdev_wait_for_examine" 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }, 00:18:18.005 { 00:18:18.005 "subsystem": "nbd", 00:18:18.005 "config": [] 00:18:18.005 } 00:18:18.005 ] 00:18:18.005 }' 00:18:18.005 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.005 16:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.005 [2024-07-15 16:11:03.933228] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
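The bdevperf configuration just replayed wires TLS into the initiator in two steps: the PSK interchange file is registered as keyring entry key0, and the NVMe/TCP controller is attached with "psk": "key0" so the qpair is set up over a secure channel. Roughly the same effect can be achieved interactively against the running bdevperf RPC socket; a sketch, assuming the rpc.py flag spellings below match this SPDK revision (they vary between releases):

  # register the PSK file under the key name the bdev layer expects
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cDbVLeguM5
  # attach the TLS-protected NVMe/TCP controller using that key
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0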
00:18:18.005 [2024-07-15 16:11:03.933332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810539 ] 00:18:18.005 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.005 [2024-07-15 16:11:03.993652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.264 [2024-07-15 16:11:04.101670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.521 [2024-07-15 16:11:04.278685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.087 16:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.087 16:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:19.087 16:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:19.087 16:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:19.346 16:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.346 16:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.346 Running I/O for 1 seconds... 00:18:20.723 00:18:20.723 Latency(us) 00:18:20.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.723 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:20.723 Verification LBA range: start 0x0 length 0x2000 00:18:20.723 nvme0n1 : 1.03 3540.11 13.83 0.00 0.00 35679.47 6019.60 39030.33 00:18:20.723 =================================================================================================================== 00:18:20.723 Total : 3540.11 13.83 0.00 0.00 35679.47 6019.60 39030.33 00:18:20.723 0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:20.723 nvmf_trace.0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:20.723 16:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 810539 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 810539 ']' 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 810539 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 810539 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 810539' 00:18:20.724 killing process with pid 810539 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 810539 00:18:20.724 Received shutdown signal, test time was about 1.000000 seconds 00:18:20.724 00:18:20.724 Latency(us) 00:18:20.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.724 =================================================================================================================== 00:18:20.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 810539 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.724 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.724 rmmod nvme_tcp 00:18:20.724 rmmod nvme_fabrics 00:18:20.724 rmmod nvme_keyring 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 810487 ']' 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 810487 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 810487 ']' 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 810487 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 810487 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 810487' 00:18:20.983 killing process with pid 810487 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 810487 00:18:20.983 16:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 810487 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.243 16:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.144 16:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:23.144 16:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RnLjDCGy4p /tmp/tmp.ertwfc4j0b /tmp/tmp.cDbVLeguM5 00:18:23.144 00:18:23.144 real 1m19.761s 00:18:23.144 user 2m3.957s 00:18:23.144 sys 0m27.079s 00:18:23.144 16:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.144 16:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.144 ************************************ 00:18:23.144 END TEST nvmf_tls 00:18:23.144 ************************************ 00:18:23.144 16:11:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:23.144 16:11:09 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:23.144 16:11:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:23.144 16:11:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.144 16:11:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:23.144 ************************************ 00:18:23.144 START TEST nvmf_fips 00:18:23.144 ************************************ 00:18:23.144 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:23.403 * Looking for test storage... 
00:18:23.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.403 16:11:09 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:23.403 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:23.404 Error setting digest 00:18:23.404 00C2A7FAA47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:23.404 00C2A7FAA47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.404 16:11:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:25.304 
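The fips.sh preamble traced above gates the test on the OpenSSL FIPS provider: it requires OpenSSL >= 3.0.0, checks for /usr/lib64/ossl-modules/fips.so, points OPENSSL_CONF at a generated spdk_fips.conf, lists the loaded providers, and finally proves enforcement by showing that a non-approved digest is rejected (the 'Error setting digest' output above is the expected result). A stand-alone sketch of the same verification, assuming an OpenSSL 3.x build with the fips module installed:

  # both the base and fips providers should be listed by name
  openssl list -providers | grep name
  # with FIPS enforced, MD5 must fail to initialize
  if echo test | openssl md5 > /dev/null 2>&1; then
      echo "MD5 unexpectedly allowed - FIPS mode not active"
  else
      echo "MD5 rejected as expected under FIPS"
  fi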
16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:25.304 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:25.304 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:25.304 Found net devices under 0000:09:00.0: cvl_0_0 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:25.304 Found net devices under 0000:09:00.1: cvl_0_1 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:25.304 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:25.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:18:25.563 00:18:25.563 --- 10.0.0.2 ping statistics --- 00:18:25.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.563 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:18:25.563 00:18:25.563 --- 10.0.0.1 ping statistics --- 00:18:25.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.563 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=812897 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 812897 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 812897 ']' 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.563 16:11:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.563 [2024-07-15 16:11:11.463066] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:25.563 [2024-07-15 16:11:11.463169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.563 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.563 [2024-07-15 16:11:11.526979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.820 [2024-07-15 16:11:11.634290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.820 [2024-07-15 16:11:11.634346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:25.820 [2024-07-15 16:11:11.634369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.820 [2024-07-15 16:11:11.634380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.820 [2024-07-15 16:11:11.634390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.820 [2024-07-15 16:11:11.634416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:26.753 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.753 [2024-07-15 16:11:12.741562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.011 [2024-07-15 16:11:12.757560] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.011 [2024-07-15 16:11:12.757807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.011 [2024-07-15 16:11:12.788724] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:27.011 malloc0 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=813057 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 813057 /var/tmp/bdevperf.sock 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 813057 ']' 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.011 16:11:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.011 [2024-07-15 16:11:12.879961] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:27.011 [2024-07-15 16:11:12.880054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813057 ] 00:18:27.011 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.011 [2024-07-15 16:11:12.936761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.269 [2024-07-15 16:11:13.048096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.833 16:11:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.833 16:11:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:27.833 16:11:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:28.120 [2024-07-15 16:11:14.051423] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.120 [2024-07-15 16:11:14.051551] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.378 TLSTESTn1 00:18:28.378 16:11:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.378 Running I/O for 10 seconds... 
00:18:38.344 00:18:38.344 Latency(us) 00:18:38.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.344 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.344 Verification LBA range: start 0x0 length 0x2000 00:18:38.344 TLSTESTn1 : 10.04 2616.87 10.22 0.00 0.00 48783.78 12233.39 76118.85 00:18:38.344 =================================================================================================================== 00:18:38.344 Total : 2616.87 10.22 0.00 0.00 48783.78 12233.39 76118.85 00:18:38.344 0 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:38.344 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:38.344 nvmf_trace.0 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 813057 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 813057 ']' 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 813057 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 813057 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 813057' 00:18:38.602 killing process with pid 813057 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 813057 00:18:38.602 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.602 00:18:38.602 Latency(us) 00:18:38.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.602 =================================================================================================================== 00:18:38.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.602 [2024-07-15 16:11:24.441726] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:38.602 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 813057 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.860 rmmod nvme_tcp 00:18:38.860 rmmod nvme_fabrics 00:18:38.860 rmmod nvme_keyring 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 812897 ']' 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 812897 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 812897 ']' 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 812897 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 812897 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 812897' 00:18:38.860 killing process with pid 812897 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 812897 00:18:38.860 [2024-07-15 16:11:24.786019] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:38.860 16:11:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 812897 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.118 16:11:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.644 16:11:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.644 16:11:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.644 00:18:41.644 real 0m17.991s 00:18:41.644 user 0m20.001s 00:18:41.644 sys 0m6.890s 00:18:41.644 16:11:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.644 16:11:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:41.644 ************************************ 00:18:41.644 END TEST nvmf_fips 00:18:41.644 
************************************ 00:18:41.644 16:11:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.644 16:11:27 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:41.644 16:11:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:41.644 16:11:27 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:41.644 16:11:27 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:41.644 16:11:27 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.644 16:11:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:43.593 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:43.593 16:11:29 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:43.593 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:43.593 Found net devices under 0000:09:00.0: cvl_0_0 00:18:43.593 16:11:29 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:43.594 Found net devices under 0000:09:00.1: cvl_0_1 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:43.594 16:11:29 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:43.594 16:11:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:43.594 16:11:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:43.594 16:11:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.594 ************************************ 00:18:43.594 START TEST nvmf_perf_adq 00:18:43.594 ************************************ 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:43.594 * Looking for test storage... 00:18:43.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.594 16:11:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.499 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:45.500 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:45.500 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:45.500 Found net devices under 0000:09:00.0: cvl_0_0 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:45.500 Found net devices under 0000:09:00.1: cvl_0_1 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:45.500 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:46.067 16:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:47.994 16:11:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:53.270 16:11:38 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:53.270 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:53.270 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:53.270 Found net devices under 0000:09:00.0: cvl_0_0 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.270 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:53.271 Found net devices under 0000:09:00.1: cvl_0_1 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.271 16:11:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.271 16:11:39 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:53.271 00:18:53.271 --- 10.0.0.2 ping statistics --- 00:18:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.271 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:53.271 00:18:53.271 --- 10.0.0.1 ping statistics --- 00:18:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.271 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=818938 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 818938 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 818938 ']' 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.271 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.271 [2024-07-15 16:11:39.086088] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:18:53.271 [2024-07-15 16:11:39.086160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.271 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.271 [2024-07-15 16:11:39.149208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.271 [2024-07-15 16:11:39.260035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.271 [2024-07-15 16:11:39.260092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.271 [2024-07-15 16:11:39.260123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.271 [2024-07-15 16:11:39.260135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.271 [2024-07-15 16:11:39.260146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.271 [2024-07-15 16:11:39.262976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.271 [2024-07-15 16:11:39.263043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.271 [2024-07-15 16:11:39.263116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.271 [2024-07-15 16:11:39.263112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.529 [2024-07-15 16:11:39.462501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.529 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.530 Malloc1 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.530 [2024-07-15 16:11:39.512579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=819081 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:53.530 16:11:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:53.788 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.714 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:55.714 16:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.714 16:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:55.714 16:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.714 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:55.714 
"tick_rate": 2700000000, 00:18:55.714 "poll_groups": [ 00:18:55.714 { 00:18:55.714 "name": "nvmf_tgt_poll_group_000", 00:18:55.714 "admin_qpairs": 1, 00:18:55.714 "io_qpairs": 1, 00:18:55.714 "current_admin_qpairs": 1, 00:18:55.714 "current_io_qpairs": 1, 00:18:55.714 "pending_bdev_io": 0, 00:18:55.714 "completed_nvme_io": 18858, 00:18:55.714 "transports": [ 00:18:55.714 { 00:18:55.714 "trtype": "TCP" 00:18:55.714 } 00:18:55.714 ] 00:18:55.714 }, 00:18:55.714 { 00:18:55.714 "name": "nvmf_tgt_poll_group_001", 00:18:55.714 "admin_qpairs": 0, 00:18:55.714 "io_qpairs": 1, 00:18:55.714 "current_admin_qpairs": 0, 00:18:55.714 "current_io_qpairs": 1, 00:18:55.714 "pending_bdev_io": 0, 00:18:55.714 "completed_nvme_io": 20102, 00:18:55.714 "transports": [ 00:18:55.714 { 00:18:55.714 "trtype": "TCP" 00:18:55.714 } 00:18:55.714 ] 00:18:55.714 }, 00:18:55.714 { 00:18:55.714 "name": "nvmf_tgt_poll_group_002", 00:18:55.714 "admin_qpairs": 0, 00:18:55.714 "io_qpairs": 1, 00:18:55.714 "current_admin_qpairs": 0, 00:18:55.714 "current_io_qpairs": 1, 00:18:55.714 "pending_bdev_io": 0, 00:18:55.714 "completed_nvme_io": 20238, 00:18:55.714 "transports": [ 00:18:55.714 { 00:18:55.714 "trtype": "TCP" 00:18:55.714 } 00:18:55.714 ] 00:18:55.714 }, 00:18:55.714 { 00:18:55.714 "name": "nvmf_tgt_poll_group_003", 00:18:55.714 "admin_qpairs": 0, 00:18:55.714 "io_qpairs": 1, 00:18:55.714 "current_admin_qpairs": 0, 00:18:55.714 "current_io_qpairs": 1, 00:18:55.715 "pending_bdev_io": 0, 00:18:55.715 "completed_nvme_io": 18530, 00:18:55.715 "transports": [ 00:18:55.715 { 00:18:55.715 "trtype": "TCP" 00:18:55.715 } 00:18:55.715 ] 00:18:55.715 } 00:18:55.715 ] 00:18:55.715 }' 00:18:55.715 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:55.715 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:55.715 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:55.715 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:55.715 16:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 819081 00:19:03.830 Initializing NVMe Controllers 00:19:03.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:03.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:03.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:03.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:03.830 Initialization complete. Launching workers. 
00:19:03.830 ======================================================== 00:19:03.830 Latency(us) 00:19:03.830 Device Information : IOPS MiB/s Average min max 00:19:03.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10199.30 39.84 6274.48 1423.76 10743.95 00:19:03.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10906.27 42.60 5868.68 2523.15 10773.97 00:19:03.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10893.47 42.55 5874.90 2720.91 9118.15 00:19:03.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10353.29 40.44 6181.65 2382.69 10281.20 00:19:03.830 ======================================================== 00:19:03.830 Total : 42352.33 165.44 6044.51 1423.76 10773.97 00:19:03.830 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.830 rmmod nvme_tcp 00:19:03.830 rmmod nvme_fabrics 00:19:03.830 rmmod nvme_keyring 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 818938 ']' 00:19:03.830 16:11:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 818938 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 818938 ']' 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 818938 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 818938 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 818938' 00:19:03.831 killing process with pid 818938 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 818938 00:19:03.831 16:11:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 818938 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- 
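The report above closes the first pass (placement-id 0, sock-priority 0), and the teardown that follows unloads the NVMe modules and kills the target. The target being measured was configured over RPC in the adq_configure_nvmf_target steps traced earlier; a condensed sketch, where rpc.py and the default /var/tmp/spdk.sock socket stand in for the harness's rpc_cmd wrapper:

# Sketch of adq_configure_nvmf_target 0 plus the qpair-distribution check.
# RPC method names and flags are the ones traced above; the rpc.py invocation is assumed.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
impl=$($RPC sock_get_default_impl | jq -r .impl_name)     # "posix" in this run
$RPC sock_impl_set_options -i "$impl" --enable-placement-id 0 --enable-zerocopy-send-server
$RPC framework_start_init                                 # target was started with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Pass criterion for this pass: each of the four poll groups (one per core in -m 0xF)
# must be servicing exactly one of the perf tool's four I/O qpairs.
count=$($RPC nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
[[ $count -eq 4 ]] || echo "I/O qpairs are not spread one per poll group" >&2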
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.400 16:11:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.309 16:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.309 16:11:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:06.309 16:11:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:07.244 16:11:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:09.148 16:11:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:14.423 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.424 16:11:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:14.424 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:14.424 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
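Before the second pass the harness bounces the NIC driver (rmmod ice, modprobe ice, then a five-second settle) and rediscovers the test ports from scratch. The discovery traced above amounts to walking sysfs from each supported PCI function to its network interface; a minimal sketch using the two E810 functions (8086:159b) found on this host:

# Minimal sketch of the net-device discovery loop above; PCI addresses are from this host.
net_devs=()
for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdev entries bound to this function
    [[ -e ${pci_net_devs[0]} ]] || continue              # skip functions with no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")              # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# net_devs is now (cvl_0_0 cvl_0_1); as in the first pass, cvl_0_0 becomes the
# target interface and cvl_0_1 the initiator interface.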
00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:14.424 Found net devices under 0000:09:00.0: cvl_0_0 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:14.424 Found net devices under 0000:09:00.1: cvl_0_1 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.424 
16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.424 16:11:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:19:14.424 00:19:14.424 --- 10.0.0.2 ping statistics --- 00:19:14.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.424 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:19:14.424 00:19:14.424 --- 10.0.0.1 ping statistics --- 00:19:14.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.424 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:14.424 net.core.busy_poll = 1 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:14.424 net.core.busy_read = 1 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=821722 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 821722 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 821722 ']' 00:19:14.424 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.425 [2024-07-15 16:12:00.213398] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:14.425 [2024-07-15 16:12:00.213495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.425 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.425 [2024-07-15 16:12:00.278401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.425 [2024-07-15 16:12:00.386281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.425 [2024-07-15 16:12:00.386336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.425 [2024-07-15 16:12:00.386364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.425 [2024-07-15 16:12:00.386375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.425 [2024-07-15 16:12:00.386385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
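adq_configure_driver, traced above, is where the ADQ-specific plumbing happens: hardware TC offload is enabled on the target port, busy polling is switched on, and an mqprio root qdisc plus a flower filter steer inbound NVMe/TCP traffic (TCP destination port 4420) into its own hardware traffic class. Condensed, with the exact values from this run; commands touching cvl_0_0 run inside the target namespace because that is where the port now lives, and SPDK_DIR stands for the checkout path:

# Condensed adq_configure_driver; queue layout and filter match the trace above.
ns_exec() { ip netns exec cvl_0_0_ns_spdk "$@"; }

ns_exec ethtool --offload cvl_0_0 hw-tc-offload on
ns_exec ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 -> queues 0-1 (default traffic), TC1 -> queues 2-3 (NVMe/TCP)
ns_exec tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns_exec tc qdisc add dev cvl_0_0 ingress
# Steer 10.0.0.2:4420 into hardware TC1; skip_sw keeps the rule out of the software path
ns_exec tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
ns_exec "$SPDK_DIR"/scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # align XPS with the ADQ queues

The second target instance is then brought up with --enable-placement-id 1 on the posix socket implementation and --sock-priority 1 on the transport, and the later nvmf_get_stats check looks for poll groups left idle rather than the even one-qpair-per-group spread required in the first pass.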
00:19:14.425 [2024-07-15 16:12:00.386468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.425 [2024-07-15 16:12:00.386534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.425 [2024-07-15 16:12:00.386602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.425 [2024-07-15 16:12:00.386605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.425 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 [2024-07-15 16:12:00.608577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 Malloc1 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 [2024-07-15 16:12:00.660067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=821772 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:14.683 16:12:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:14.943 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:16.844 "tick_rate": 2700000000, 00:19:16.844 "poll_groups": [ 00:19:16.844 { 00:19:16.844 "name": "nvmf_tgt_poll_group_000", 00:19:16.844 "admin_qpairs": 1, 00:19:16.844 "io_qpairs": 1, 00:19:16.844 "current_admin_qpairs": 1, 00:19:16.844 "current_io_qpairs": 1, 00:19:16.844 "pending_bdev_io": 0, 00:19:16.844 "completed_nvme_io": 25066, 00:19:16.844 "transports": [ 00:19:16.844 { 00:19:16.844 "trtype": "TCP" 00:19:16.844 } 00:19:16.844 ] 00:19:16.844 }, 00:19:16.844 { 00:19:16.844 "name": "nvmf_tgt_poll_group_001", 00:19:16.844 "admin_qpairs": 0, 00:19:16.844 "io_qpairs": 3, 00:19:16.844 "current_admin_qpairs": 0, 00:19:16.844 "current_io_qpairs": 3, 00:19:16.844 "pending_bdev_io": 0, 00:19:16.844 "completed_nvme_io": 26497, 00:19:16.844 "transports": [ 00:19:16.844 { 00:19:16.844 "trtype": "TCP" 00:19:16.844 } 00:19:16.844 ] 00:19:16.844 }, 00:19:16.844 { 00:19:16.844 "name": "nvmf_tgt_poll_group_002", 00:19:16.844 "admin_qpairs": 0, 00:19:16.844 "io_qpairs": 0, 00:19:16.844 "current_admin_qpairs": 0, 00:19:16.844 "current_io_qpairs": 0, 00:19:16.844 "pending_bdev_io": 0, 00:19:16.844 "completed_nvme_io": 0, 
00:19:16.844 "transports": [ 00:19:16.844 { 00:19:16.844 "trtype": "TCP" 00:19:16.844 } 00:19:16.844 ] 00:19:16.844 }, 00:19:16.844 { 00:19:16.844 "name": "nvmf_tgt_poll_group_003", 00:19:16.844 "admin_qpairs": 0, 00:19:16.844 "io_qpairs": 0, 00:19:16.844 "current_admin_qpairs": 0, 00:19:16.844 "current_io_qpairs": 0, 00:19:16.844 "pending_bdev_io": 0, 00:19:16.844 "completed_nvme_io": 0, 00:19:16.844 "transports": [ 00:19:16.844 { 00:19:16.844 "trtype": "TCP" 00:19:16.844 } 00:19:16.844 ] 00:19:16.844 } 00:19:16.844 ] 00:19:16.844 }' 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:16.844 16:12:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 821772 00:19:24.956 Initializing NVMe Controllers 00:19:24.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:24.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:24.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:24.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:24.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:24.956 Initialization complete. Launching workers. 00:19:24.956 ======================================================== 00:19:24.956 Latency(us) 00:19:24.956 Device Information : IOPS MiB/s Average min max 00:19:24.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4091.40 15.98 15643.84 1707.08 62517.63 00:19:24.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13651.40 53.33 4688.75 1816.46 45504.21 00:19:24.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5116.30 19.99 12512.78 1875.60 61271.52 00:19:24.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4633.40 18.10 13816.25 2851.50 62844.83 00:19:24.956 ======================================================== 00:19:24.956 Total : 27492.50 107.39 9313.40 1707.08 62844.83 00:19:24.956 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.956 rmmod nvme_tcp 00:19:24.956 rmmod nvme_fabrics 00:19:24.956 rmmod nvme_keyring 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 821722 ']' 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 821722 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 821722 ']' 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 821722 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821722 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821722' 00:19:24.956 killing process with pid 821722 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 821722 00:19:24.956 16:12:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 821722 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.215 16:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.506 16:12:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.506 16:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:28.506 00:19:28.506 real 0m45.013s 00:19:28.506 user 2m36.688s 00:19:28.506 sys 0m10.940s 00:19:28.506 16:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.506 16:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.506 ************************************ 00:19:28.506 END TEST nvmf_perf_adq 00:19:28.506 ************************************ 00:19:28.506 16:12:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:28.507 16:12:14 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:28.507 16:12:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:28.507 16:12:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.507 16:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.507 ************************************ 00:19:28.507 START TEST nvmf_shutdown 00:19:28.507 ************************************ 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:28.507 * Looking for test storage... 
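Before the shutdown suite proceeds, the perf_adq run has already torn its environment down via nvmftestfini, as traced above. Roughly, with the process id from this run and with the namespace removal written out as a plain "ip netns delete" (an assumption about what the _remove_spdk_ns helper does):

# Rough sketch of the nvmftestfini teardown traced above.
sync
modprobe -v -r nvme-tcp                          # also pulls out nvme_fabrics / nvme_keyring, per the log
modprobe -v -r nvme-fabrics
kill 821722                                      # stop the nvmf_tgt reactor process (pid from this run)
while kill -0 821722 2>/dev/null; do sleep 0.5; done   # wait for it to exit
ip netns delete cvl_0_0_ns_spdk                  # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                         # clear the initiator-side address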
00:19:28.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:28.507 ************************************ 00:19:28.507 START TEST nvmf_shutdown_tc1 00:19:28.507 ************************************ 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:28.507 16:12:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.507 16:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:31.038 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:31.038 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.038 16:12:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:31.038 Found net devices under 0000:09:00.0: cvl_0_0 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:31.038 Found net devices under 0000:09:00.1: cvl_0_1 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:19:31.038 00:19:31.038 --- 10.0.0.2 ping statistics --- 00:19:31.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.038 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:19:31.038 00:19:31.038 --- 10.0.0.1 ping statistics --- 00:19:31.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.038 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:31.038 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=825758 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 825758 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 825758 ']' 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 [2024-07-15 16:12:16.681335] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:19:31.039 [2024-07-15 16:12:16.681408] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.039 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.039 [2024-07-15 16:12:16.747578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.039 [2024-07-15 16:12:16.854820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.039 [2024-07-15 16:12:16.854873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.039 [2024-07-15 16:12:16.854898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.039 [2024-07-15 16:12:16.854909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.039 [2024-07-15 16:12:16.854918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.039 [2024-07-15 16:12:16.855050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.039 [2024-07-15 16:12:16.855111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.039 [2024-07-15 16:12:16.855135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:31.039 [2024-07-15 16:12:16.855139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 16:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 [2024-07-15 16:12:16.997550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:31.039 16:12:17 
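At this point the environment that the rest of tc1 relies on is in place: the two E810 ports are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side) while cvl_0_1 keeps 10.0.0.1/24 on the host (initiator side), TCP port 4420 is open in iptables, the cross-namespace pings succeed, and nvmf_tgt with its TCP transport is running inside the namespace. Condensed into plain commands, this is roughly (a simplified sketch assembled from the trace above, not the literal nvmf/common.sh code; rpc.py stands in for the rpc_cmd wrapper):

    # split the link: target NIC in its own namespace, initiator NIC stays on the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # NVMe-oF target lives inside the namespace; the TCP transport is created over RPC
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    rpc.py nvmf_create_transport -t tcp -o -u 8192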
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.298 Malloc1 00:19:31.298 [2024-07-15 16:12:17.072781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.298 Malloc2 00:19:31.298 Malloc3 00:19:31.298 Malloc4 00:19:31.298 Malloc5 00:19:31.298 Malloc6 00:19:31.556 Malloc7 00:19:31.556 Malloc8 00:19:31.556 Malloc9 00:19:31.556 Malloc10 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=825823 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 825823 
/var/tmp/bdevperf.sock 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 825823 ']' 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.556 { 00:19:31.556 "params": { 00:19:31.556 "name": "Nvme$subsystem", 00:19:31.556 "trtype": "$TEST_TRANSPORT", 00:19:31.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.556 "adrfam": "ipv4", 00:19:31.556 "trsvcid": "$NVMF_PORT", 00:19:31.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.556 "hdgst": ${hdgst:-false}, 00:19:31.556 "ddgst": ${ddgst:-false} 00:19:31.556 }, 00:19:31.556 "method": "bdev_nvme_attach_controller" 00:19:31.556 } 00:19:31.556 EOF 00:19:31.556 )") 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.556 { 00:19:31.556 "params": { 00:19:31.556 "name": "Nvme$subsystem", 00:19:31.556 "trtype": "$TEST_TRANSPORT", 00:19:31.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.556 "adrfam": "ipv4", 00:19:31.556 "trsvcid": "$NVMF_PORT", 00:19:31.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.556 "hdgst": ${hdgst:-false}, 00:19:31.556 "ddgst": ${ddgst:-false} 00:19:31.556 }, 00:19:31.556 "method": "bdev_nvme_attach_controller" 00:19:31.556 } 00:19:31.556 EOF 00:19:31.556 )") 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.556 { 00:19:31.556 "params": { 00:19:31.556 
"name": "Nvme$subsystem", 00:19:31.556 "trtype": "$TEST_TRANSPORT", 00:19:31.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.556 "adrfam": "ipv4", 00:19:31.556 "trsvcid": "$NVMF_PORT", 00:19:31.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.556 "hdgst": ${hdgst:-false}, 00:19:31.556 "ddgst": ${ddgst:-false} 00:19:31.556 }, 00:19:31.556 "method": "bdev_nvme_attach_controller" 00:19:31.556 } 00:19:31.556 EOF 00:19:31.556 )") 00:19:31.556 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.816 { 00:19:31.816 "params": { 00:19:31.816 "name": "Nvme$subsystem", 00:19:31.816 "trtype": "$TEST_TRANSPORT", 00:19:31.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.816 "adrfam": "ipv4", 00:19:31.816 "trsvcid": "$NVMF_PORT", 00:19:31.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.816 "hdgst": ${hdgst:-false}, 00:19:31.816 "ddgst": ${ddgst:-false} 00:19:31.816 }, 00:19:31.816 "method": "bdev_nvme_attach_controller" 00:19:31.816 } 00:19:31.816 EOF 00:19:31.816 )") 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.816 { 00:19:31.816 "params": { 00:19:31.816 "name": "Nvme$subsystem", 00:19:31.816 "trtype": "$TEST_TRANSPORT", 00:19:31.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.816 "adrfam": "ipv4", 00:19:31.816 "trsvcid": "$NVMF_PORT", 00:19:31.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.816 "hdgst": ${hdgst:-false}, 00:19:31.816 "ddgst": ${ddgst:-false} 00:19:31.816 }, 00:19:31.816 "method": "bdev_nvme_attach_controller" 00:19:31.816 } 00:19:31.816 EOF 00:19:31.816 )") 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.816 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.816 { 00:19:31.816 "params": { 00:19:31.816 "name": "Nvme$subsystem", 00:19:31.816 "trtype": "$TEST_TRANSPORT", 00:19:31.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.816 "adrfam": "ipv4", 00:19:31.816 "trsvcid": "$NVMF_PORT", 00:19:31.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.816 "hdgst": ${hdgst:-false}, 00:19:31.816 "ddgst": ${ddgst:-false} 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 } 00:19:31.817 EOF 00:19:31.817 )") 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.817 { 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme$subsystem", 
00:19:31.817 "trtype": "$TEST_TRANSPORT", 00:19:31.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "$NVMF_PORT", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.817 "hdgst": ${hdgst:-false}, 00:19:31.817 "ddgst": ${ddgst:-false} 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 } 00:19:31.817 EOF 00:19:31.817 )") 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.817 { 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme$subsystem", 00:19:31.817 "trtype": "$TEST_TRANSPORT", 00:19:31.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "$NVMF_PORT", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.817 "hdgst": ${hdgst:-false}, 00:19:31.817 "ddgst": ${ddgst:-false} 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 } 00:19:31.817 EOF 00:19:31.817 )") 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.817 { 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme$subsystem", 00:19:31.817 "trtype": "$TEST_TRANSPORT", 00:19:31.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "$NVMF_PORT", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.817 "hdgst": ${hdgst:-false}, 00:19:31.817 "ddgst": ${ddgst:-false} 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 } 00:19:31.817 EOF 00:19:31.817 )") 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.817 { 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme$subsystem", 00:19:31.817 "trtype": "$TEST_TRANSPORT", 00:19:31.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "$NVMF_PORT", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.817 "hdgst": ${hdgst:-false}, 00:19:31.817 "ddgst": ${ddgst:-false} 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 } 00:19:31.817 EOF 00:19:31.817 )") 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:31.817 16:12:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme1", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme2", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme3", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme4", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme5", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme6", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme7", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme8", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:31.817 "hdgst": false, 
00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme9", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 },{ 00:19:31.817 "params": { 00:19:31.817 "name": "Nvme10", 00:19:31.817 "trtype": "tcp", 00:19:31.817 "traddr": "10.0.0.2", 00:19:31.817 "adrfam": "ipv4", 00:19:31.817 "trsvcid": "4420", 00:19:31.817 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:31.817 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:31.817 "hdgst": false, 00:19:31.817 "ddgst": false 00:19:31.817 }, 00:19:31.817 "method": "bdev_nvme_attach_controller" 00:19:31.817 }' 00:19:31.817 [2024-07-15 16:12:17.591987] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:31.817 [2024-07-15 16:12:17.592063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:31.817 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.817 [2024-07-15 16:12:17.657385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.817 [2024-07-15 16:12:17.769401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 825823 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:33.723 16:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:34.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 825823 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 825758 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.657 { 00:19:34.657 "params": { 00:19:34.657 "name": "Nvme$subsystem", 00:19:34.657 "trtype": "$TEST_TRANSPORT", 00:19:34.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.657 "adrfam": "ipv4", 00:19:34.657 "trsvcid": "$NVMF_PORT", 00:19:34.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.657 "hdgst": ${hdgst:-false}, 00:19:34.657 "ddgst": ${ddgst:-false} 00:19:34.657 }, 00:19:34.657 "method": "bdev_nvme_attach_controller" 00:19:34.657 } 00:19:34.657 EOF 00:19:34.657 )") 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.657 { 00:19:34.657 "params": { 00:19:34.657 "name": "Nvme$subsystem", 00:19:34.657 "trtype": "$TEST_TRANSPORT", 00:19:34.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.657 "adrfam": "ipv4", 00:19:34.657 "trsvcid": "$NVMF_PORT", 00:19:34.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.657 "hdgst": ${hdgst:-false}, 00:19:34.657 "ddgst": ${ddgst:-false} 00:19:34.657 }, 00:19:34.657 "method": "bdev_nvme_attach_controller" 00:19:34.657 } 00:19:34.657 EOF 00:19:34.657 )") 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.657 { 00:19:34.657 "params": { 00:19:34.657 "name": "Nvme$subsystem", 00:19:34.657 "trtype": "$TEST_TRANSPORT", 00:19:34.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.657 "adrfam": "ipv4", 00:19:34.657 "trsvcid": "$NVMF_PORT", 00:19:34.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.657 "hdgst": ${hdgst:-false}, 00:19:34.657 "ddgst": ${ddgst:-false} 00:19:34.657 }, 00:19:34.657 "method": "bdev_nvme_attach_controller" 00:19:34.657 } 00:19:34.657 EOF 00:19:34.657 )") 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.657 { 00:19:34.657 "params": { 00:19:34.657 "name": "Nvme$subsystem", 00:19:34.657 "trtype": "$TEST_TRANSPORT", 00:19:34.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.657 "adrfam": "ipv4", 00:19:34.657 "trsvcid": "$NVMF_PORT", 00:19:34.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.657 "hdgst": ${hdgst:-false}, 00:19:34.657 "ddgst": ${ddgst:-false} 00:19:34.657 }, 00:19:34.657 "method": "bdev_nvme_attach_controller" 00:19:34.657 } 00:19:34.657 EOF 00:19:34.657 )") 00:19:34.657 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.658 { 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme$subsystem", 00:19:34.658 "trtype": "$TEST_TRANSPORT", 00:19:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "$NVMF_PORT", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.658 "hdgst": ${hdgst:-false}, 00:19:34.658 "ddgst": ${ddgst:-false} 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 } 00:19:34.658 EOF 00:19:34.658 )") 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:34.658 16:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme1", 00:19:34.658 "trtype": "tcp", 00:19:34.658 "traddr": "10.0.0.2", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "4420", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.658 "hdgst": false, 00:19:34.658 "ddgst": false 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 },{ 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme2", 00:19:34.658 "trtype": "tcp", 00:19:34.658 "traddr": "10.0.0.2", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "4420", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.658 "hdgst": false, 00:19:34.658 "ddgst": false 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 },{ 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme3", 00:19:34.658 "trtype": "tcp", 00:19:34.658 "traddr": "10.0.0.2", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "4420", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:34.658 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:34.658 "hdgst": false, 00:19:34.658 "ddgst": false 00:19:34.658 }, 00:19:34.658 "method": "bdev_nvme_attach_controller" 00:19:34.658 },{ 00:19:34.658 "params": { 00:19:34.658 "name": "Nvme4", 00:19:34.658 "trtype": "tcp", 00:19:34.658 "traddr": "10.0.0.2", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "4420", 00:19:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme5", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme6", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme7", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme8", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:34.659 "hdgst": false, 
00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme9", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 },{ 00:19:34.659 "params": { 00:19:34.659 "name": "Nvme10", 00:19:34.659 "trtype": "tcp", 00:19:34.659 "traddr": "10.0.0.2", 00:19:34.659 "adrfam": "ipv4", 00:19:34.659 "trsvcid": "4420", 00:19:34.659 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:34.659 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:34.659 "hdgst": false, 00:19:34.659 "ddgst": false 00:19:34.659 }, 00:19:34.659 "method": "bdev_nvme_attach_controller" 00:19:34.659 }' 00:19:34.659 [2024-07-15 16:12:20.651234] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:34.659 [2024-07-15 16:12:20.651334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826240 ] 00:19:34.918 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.918 [2024-07-15 16:12:20.717623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.918 [2024-07-15 16:12:20.830669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.296 Running I/O for 1 seconds... 00:19:37.672 00:19:37.672 Latency(us) 00:19:37.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme1n1 : 1.13 226.65 14.17 0.00 0.00 279605.48 35340.89 253211.69 00:19:37.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme2n1 : 1.14 224.91 14.06 0.00 0.00 277168.36 19806.44 259425.47 00:19:37.672 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme3n1 : 1.12 229.04 14.32 0.00 0.00 266528.62 19223.89 254765.13 00:19:37.672 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme4n1 : 1.11 234.24 14.64 0.00 0.00 255594.96 9660.49 251658.24 00:19:37.672 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme5n1 : 1.18 271.17 16.95 0.00 0.00 217562.38 19515.16 256318.58 00:19:37.672 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme6n1 : 1.15 224.90 14.06 0.00 0.00 259077.80 1565.58 260978.92 00:19:37.672 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme7n1 : 1.13 226.12 14.13 0.00 0.00 253015.61 27379.48 253211.69 00:19:37.672 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 
length 0x400 00:19:37.672 Nvme8n1 : 1.15 225.77 14.11 0.00 0.00 249240.97 1347.13 262532.36 00:19:37.672 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme9n1 : 1.16 220.95 13.81 0.00 0.00 250879.05 20097.71 265639.25 00:19:37.672 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.672 Verification LBA range: start 0x0 length 0x400 00:19:37.672 Nvme10n1 : 1.19 267.88 16.74 0.00 0.00 204214.23 4053.52 287387.50 00:19:37.672 =================================================================================================================== 00:19:37.672 Total : 2351.64 146.98 0.00 0.00 249387.11 1347.13 287387.50 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:37.931 rmmod nvme_tcp 00:19:37.931 rmmod nvme_fabrics 00:19:37.931 rmmod nvme_keyring 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 825758 ']' 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 825758 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 825758 ']' 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 825758 00:19:37.931 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 825758 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.932 
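A quick consistency check on the bdevperf summary above: with the 64 KiB I/O size requested on the command line (-o 65536), throughput in MiB/s should equal IOPS divided by 16. That holds per row, e.g. Nvme1n1: 226.65 / 16 ≈ 14.17 MiB/s, and for the total row: 2351.64 / 16 ≈ 146.98 MiB/s, both matching the reported columns.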
16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 825758' 00:19:37.932 killing process with pid 825758 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 825758 00:19:37.932 16:12:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 825758 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.496 16:12:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.399 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:40.399 00:19:40.399 real 0m11.994s 00:19:40.399 user 0m34.464s 00:19:40.399 sys 0m3.359s 00:19:40.399 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.399 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.399 ************************************ 00:19:40.399 END TEST nvmf_shutdown_tc1 00:19:40.399 ************************************ 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:40.658 ************************************ 00:19:40.658 START TEST nvmf_shutdown_tc2 00:19:40.658 ************************************ 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.658 16:12:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:40.658 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:40.658 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:40.658 Found net devices under 0000:09:00.0: cvl_0_0 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.658 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:40.659 Found net devices under 0000:09:00.1: cvl_0_1 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:19:40.659 00:19:40.659 --- 10.0.0.2 ping statistics --- 00:19:40.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.659 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:19:40.659 00:19:40.659 --- 10.0.0.1 ping statistics --- 00:19:40.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.659 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=827012 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 827012 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 827012 ']' 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.659 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 [2024-07-15 16:12:26.660314] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:40.659 [2024-07-15 16:12:26.660411] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.936 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.936 [2024-07-15 16:12:26.727619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.936 [2024-07-15 16:12:26.836691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.936 [2024-07-15 16:12:26.836761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.936 [2024-07-15 16:12:26.836774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.936 [2024-07-15 16:12:26.836785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.936 [2024-07-15 16:12:26.836794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
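The tc2 target is launched inside the test namespace with reactors pinned by core mask 0x1E and every tracepoint group enabled (-e 0xFFFF), and the harness then blocks until the RPC socket answers. Stripped of the helper functions, the launch amounts to roughly the sketch below; paths are relative to the spdk checkout, the namespace name is taken from this run, and the socket poll is only a crude stand-in for waitforlisten:

# Rough stand-alone equivalent of the nvmf_tgt launch above (sketch, not the harness code).
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten keeps polling the RPC socket until the target responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done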
00:19:40.936 [2024-07-15 16:12:26.836891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.936 [2024-07-15 16:12:26.836914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.936 [2024-07-15 16:12:26.837036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.936 [2024-07-15 16:12:26.837040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.196 16:12:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.196 [2024-07-15 16:12:26.992918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.196 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.197 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.197 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:41.197 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:41.197 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.197 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.197 Malloc1 00:19:41.197 [2024-07-15 16:12:27.082666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.197 Malloc2 00:19:41.197 Malloc3 00:19:41.456 Malloc4 00:19:41.456 Malloc5 00:19:41.456 Malloc6 00:19:41.456 Malloc7 00:19:41.456 Malloc8 00:19:41.456 Malloc9 00:19:41.715 Malloc10 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=827186 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 827186 /var/tmp/bdevperf.sock 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 827186 ']' 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
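The config handed to bdevperf over /dev/fd/63 is produced by gen_nvmf_target_json 1..10: one bdev_nvme_attach_controller entry per subsystem, joined with commas and pretty-printed through jq, while bdevperf itself runs a verify workload at queue depth 64 with 64 KiB I/O for 10 seconds (per the flags above). A condensed sketch of that generator, reconstructed from the heredoc fragments that follow; the real helper additionally wraps the entries in a bdev subsystem config object, which is omitted here:

# Approximate reconstruction of the per-controller JSON generator (sketch only).
config=()
for subsystem in 1 2 3 4 5 6 7 8 9 10; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the ten entries with commas; the harness embeds this list in the final config.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .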
00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 
00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:41.716 { 00:19:41.716 "params": { 00:19:41.716 "name": "Nvme$subsystem", 00:19:41.716 "trtype": "$TEST_TRANSPORT", 00:19:41.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.716 "adrfam": "ipv4", 00:19:41.716 "trsvcid": "$NVMF_PORT", 00:19:41.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.716 "hdgst": ${hdgst:-false}, 00:19:41.716 "ddgst": ${ddgst:-false} 00:19:41.716 }, 00:19:41.716 "method": "bdev_nvme_attach_controller" 00:19:41.716 } 00:19:41.716 EOF 00:19:41.716 )") 00:19:41.716 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:41.717 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:41.717 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:41.717 16:12:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme1", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme2", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme3", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme4", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme5", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme6", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme7", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme8", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:41.717 "hdgst": false, 
00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme9", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 },{ 00:19:41.717 "params": { 00:19:41.717 "name": "Nvme10", 00:19:41.717 "trtype": "tcp", 00:19:41.717 "traddr": "10.0.0.2", 00:19:41.717 "adrfam": "ipv4", 00:19:41.717 "trsvcid": "4420", 00:19:41.717 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:41.717 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:41.717 "hdgst": false, 00:19:41.717 "ddgst": false 00:19:41.717 }, 00:19:41.717 "method": "bdev_nvme_attach_controller" 00:19:41.717 }' 00:19:41.717 [2024-07-15 16:12:27.601366] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:41.717 [2024-07-15 16:12:27.601453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827186 ] 00:19:41.717 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.717 [2024-07-15 16:12:27.665344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.977 [2024-07-15 16:12:27.775588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.353 Running I/O for 10 seconds... 00:19:43.613 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.613 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:43.613 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:43.613 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.613 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:43.873 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:44.131 16:12:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 827186 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 827186 ']' 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 827186 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 
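waitforio above is a simple retry loop: up to ten passes, 250 ms apart, reading num_read_ops for Nvme1n1 from bdev_get_iostat until at least 100 reads have completed (this run needs three passes: 3, then 67, then 195). Reconstructed as a stand-alone sketch using rpc.py against the same socket:

# Poll read completions on Nvme1n1 until I/O is clearly flowing (sketch of waitforio).
ret=1
for i in $(seq 10); do
    read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
[ "$ret" -eq 0 ] || echo "bdevperf never reached 100 reads on Nvme1n1" >&2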
00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 827186 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 827186' 00:19:44.389 killing process with pid 827186 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 827186 00:19:44.389 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 827186 00:19:44.389 Received shutdown signal, test time was about 0.981249 seconds 00:19:44.389 00:19:44.389 Latency(us) 00:19:44.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme1n1 : 0.98 262.15 16.38 0.00 0.00 241327.22 23301.69 245444.46 00:19:44.389 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme2n1 : 0.97 264.00 16.50 0.00 0.00 235071.72 18350.08 251658.24 00:19:44.389 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme3n1 : 0.98 261.11 16.32 0.00 0.00 233208.04 18155.90 257872.02 00:19:44.389 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme4n1 : 0.96 267.70 16.73 0.00 0.00 222584.04 17573.36 251658.24 00:19:44.389 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme5n1 : 0.96 200.06 12.50 0.00 0.00 291941.14 24369.68 246997.90 00:19:44.389 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme6n1 : 0.95 205.89 12.87 0.00 0.00 276522.86 1377.47 262532.36 00:19:44.389 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme7n1 : 0.94 205.06 12.82 0.00 0.00 271910.43 18544.26 256318.58 00:19:44.389 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme8n1 : 0.97 263.28 16.45 0.00 0.00 208590.51 18447.17 254765.13 00:19:44.389 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme9n1 : 0.96 203.19 12.70 0.00 0.00 263363.51 2475.80 285834.05 00:19:44.389 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:44.389 Verification LBA range: start 0x0 length 0x400 00:19:44.389 Nvme10n1 : 0.94 209.08 13.07 0.00 0.00 247233.20 7815.77 251658.24 00:19:44.389 =================================================================================================================== 00:19:44.389 Total : 2341.52 146.34 0.00 
0.00 246258.88 1377.47 285834.05 00:19:44.647 16:12:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.024 rmmod nvme_tcp 00:19:46.024 rmmod nvme_fabrics 00:19:46.024 rmmod nvme_keyring 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 827012 ']' 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 827012 ']' 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 827012' 00:19:46.024 killing process with pid 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 827012 00:19:46.024 16:12:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 827012 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
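Teardown of tc2 mirrors the setup: bdevperf state and generated config files are removed, the kernel NVMe/TCP initiator modules are unloaded, the target process is killed, and the namespace and test addresses are flushed. Compressed into one sequence, with the pid and interface names from this run, it is roughly as follows; the namespace removal is an assumed equivalent of remove_spdk_ns:

# Rough teardown sequence mirrored from the log output around this point (sketch).
rm -f ./local-job0-0-verify.state
rm -rf "$SPDK_DIR"/test/nvmf/target/bdevperf.conf "$SPDK_DIR"/test/nvmf/target/rpcs.txt  # $SPDK_DIR assumed
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring as unused dependents
modprobe -v -r nvme-fabrics
kill 827012 && wait 827012 2>/dev/null        # nvmf_tgt pid from this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1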
00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.282 16:12:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.825 00:19:48.825 real 0m7.809s 00:19:48.825 user 0m23.897s 00:19:48.825 sys 0m1.489s 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.825 ************************************ 00:19:48.825 END TEST nvmf_shutdown_tc2 00:19:48.825 ************************************ 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:48.825 ************************************ 00:19:48.825 START TEST nvmf_shutdown_tc3 00:19:48.825 ************************************ 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
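nvmf_shutdown_tc3 then re-runs the same NIC discovery: the Intel E810 device IDs (8086:1592 and 8086:159b) are matched against the PCI bus and each hit is resolved to its kernel netdev under sysfs, which is what yields cvl_0_0 and cvl_0_1 below. A simplified stand-alone version of that scan (the harness uses its own pci_bus_cache rather than lspci):

# Simplified E810 port discovery (sketch): map matching PCI functions to netdev names.
net_devs=()
for id in 1592 159b; do
    for pci in $(lspci -Dnd "8086:$id" | awk '{print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue
            net_devs+=("${dev##*/}")    # e.g. cvl_0_0 and cvl_0_1 in this run
        done
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"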
00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.825 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.826 16:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:48.826 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:48.826 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:48.826 Found net devices under 0000:09:00.0: cvl_0_0 00:19:48.826 16:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:48.826 Found net devices under 0000:09:00.1: cvl_0_1 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:19:48.826 00:19:48.826 --- 10.0.0.2 ping statistics --- 00:19:48.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.826 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:19:48.826 00:19:48.826 --- 10.0.0.1 ping statistics --- 00:19:48.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.826 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=828092 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 828092 00:19:48.826 16:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 828092 ']' 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.826 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:48.826 [2024-07-15 16:12:34.530015] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:48.826 [2024-07-15 16:12:34.530090] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.826 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.826 [2024-07-15 16:12:34.599934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.826 [2024-07-15 16:12:34.711592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.826 [2024-07-15 16:12:34.711651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.826 [2024-07-15 16:12:34.711673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.826 [2024-07-15 16:12:34.711684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.826 [2024-07-15 16:12:34.711694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
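nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask -m 0x1E, i.e. bits 1-4 set, which is why the notices that follow report reactors on cores 1 through 4 (bdevperf, started later in this test, reports a single reactor on core 0). A small sketch decoding such a mask:

# Decode an SPDK-style hex core mask into the cores it selects.
mask=0x1E                       # binary 11110 -> cores 1, 2, 3 and 4
for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"
    fi
done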
00:19:48.826 [2024-07-15 16:12:34.711793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.826 [2024-07-15 16:12:34.711851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.826 [2024-07-15 16:12:34.711917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:48.827 [2024-07-15 16:12:34.711920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.087 [2024-07-15 16:12:34.875876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.087 16:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.087 Malloc1 00:19:49.087 [2024-07-15 16:12:34.964769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.087 Malloc2 00:19:49.087 Malloc3 00:19:49.087 Malloc4 00:19:49.345 Malloc5 00:19:49.345 Malloc6 00:19:49.345 Malloc7 00:19:49.345 Malloc8 00:19:49.345 Malloc9 00:19:49.604 Malloc10 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=828272 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 828272 /var/tmp/bdevperf.sock 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 828272 ']' 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
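bdevperf is pointed at its own RPC socket (-r /var/tmp/bdevperf.sock) and reads its configuration from /dev/fd/63, i.e. from gen_nvmf_target_json 1..10, which emits one bdev_nvme_attach_controller stanza per subsystem id; the per-subsystem heredoc expansions appear in the records that follow. A minimal sketch of such a generator (illustrative only; the helper name and the wrapper object are assumptions, jq required):

#!/usr/bin/env bash
# Sketch: build one attach-controller stanza per subsystem id and join them
# into a bdev-subsystem config that bdevperf can consume via --json.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # Join the stanzas with commas and pretty-print; the exact wrapper used by
    # the harness may differ.
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

gen_target_json 1 2 3   # the test passes subsystem ids 1 through 10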
00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 
00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.604 { 00:19:49.604 "params": { 00:19:49.604 "name": "Nvme$subsystem", 00:19:49.604 "trtype": "$TEST_TRANSPORT", 00:19:49.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.604 "adrfam": "ipv4", 00:19:49.604 "trsvcid": "$NVMF_PORT", 00:19:49.604 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.604 "hdgst": ${hdgst:-false}, 00:19:49.604 "ddgst": ${ddgst:-false} 00:19:49.604 }, 00:19:49.604 "method": "bdev_nvme_attach_controller" 00:19:49.604 } 00:19:49.604 EOF 00:19:49.604 )") 00:19:49.604 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.605 { 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme$subsystem", 00:19:49.605 "trtype": "$TEST_TRANSPORT", 00:19:49.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "$NVMF_PORT", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.605 "hdgst": ${hdgst:-false}, 00:19:49.605 "ddgst": ${ddgst:-false} 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 } 00:19:49.605 EOF 00:19:49.605 )") 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.605 { 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme$subsystem", 00:19:49.605 "trtype": "$TEST_TRANSPORT", 00:19:49.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "$NVMF_PORT", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.605 "hdgst": ${hdgst:-false}, 00:19:49.605 "ddgst": ${ddgst:-false} 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 } 00:19:49.605 EOF 00:19:49.605 )") 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:49.605 16:12:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme1", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme2", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme3", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme4", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme5", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme6", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme7", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme8", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:49.605 "hdgst": false, 
00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme9", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 },{ 00:19:49.605 "params": { 00:19:49.605 "name": "Nvme10", 00:19:49.605 "trtype": "tcp", 00:19:49.605 "traddr": "10.0.0.2", 00:19:49.605 "adrfam": "ipv4", 00:19:49.605 "trsvcid": "4420", 00:19:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:49.605 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:49.605 "hdgst": false, 00:19:49.605 "ddgst": false 00:19:49.605 }, 00:19:49.605 "method": "bdev_nvme_attach_controller" 00:19:49.605 }' 00:19:49.605 [2024-07-15 16:12:35.466410] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:49.605 [2024-07-15 16:12:35.466496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828272 ] 00:19:49.605 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.605 [2024-07-15 16:12:35.530058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.863 [2024-07-15 16:12:35.640311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.242 Running I/O for 10 seconds... 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:51.500 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:51.758 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.015 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:52.015 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:52.015 16:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:52.288 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:52.288 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.288 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 828092 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 828092 ']' 00:19:52.289 
16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 828092 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 828092 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 828092' 00:19:52.289 killing process with pid 828092 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 828092 00:19:52.289 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 828092 00:19:52.289 [2024-07-15 16:12:38.104317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b1a0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.104477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b1a0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.104495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b1a0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.104523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b1a0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.105993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is 
same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106383] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the 
state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.106759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dba0 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.289 [2024-07-15 16:12:38.108333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 
16:12:38.108679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same 
with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.108961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.109064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b640 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.111758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.111803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.111821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.111841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.111856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.111870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.111884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.111898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.111911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4e390 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.112029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7e240 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.112222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2830 is same with the state(5) to be set 00:19:52.290 [2024-07-15 16:12:38.112381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.290 [2024-07-15 16:12:38.112461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.290 [2024-07-15 16:12:38.112475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.291 [2024-07-15 16:12:38.112488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.112501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9de450 is same with the state(5) to be set 00:19:52.291 [2024-07-15 16:12:38.112822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.112847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.112876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.112892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.112909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.112924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.112941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.112963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.112982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138bfa0 is same with the state(5) to be set 00:19:52.291 [2024-07-15 16:12:38.113735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138bfa0 is same with the state(5) to be set 00:19:52.291 [2024-07-15 16:12:38.113750] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.113965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.113981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.114000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.114014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.114029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.114043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.114058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.114071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.114087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.291 [2024-07-15 16:12:38.114100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.291 [2024-07-15 16:12:38.114116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the 
state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.292 [2024-07-15 16:12:38.114817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.292 [2024-07-15 16:12:38.114845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:52.292 [2024-07-15 16:12:38.114893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.114979] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa46e70 was disconnected and freed. reset controller.
00:19:52.292 [2024-07-15 16:12:38.114991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.292 [2024-07-15 16:12:38.115239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115664]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c440 is same with the state(5) to be set 00:19:52.293 [2024-07-15 16:12:38.115668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.115964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.115981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.116006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.293 [2024-07-15 16:12:38.116020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.293 [2024-07-15 16:12:38.116035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.294 [2024-07-15 16:12:38.116887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.294 [2024-07-15 16:12:38.116904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same pair of NOTICE lines repeats between 16:12:38.116922 and 16:12:38.117401 for WRITE sqid:1 cid:54 through cid:61 (lba:31488 through lba:32384), READ sqid:1 cid:0 through cid:3 (lba:24576 through lba:24960) and WRITE sqid:1 cid:62 and cid:63 (lba:32512 and lba:32640), each command print followed by an ABORTED - SQ DELETION (00/08) completion]
[interleaved with those lines, tcp.c:1607:nvmf_tcp_qpair_set_recv_state repeatedly logs *ERROR*: The recv state of tqpair=0x138c900 is same with the state(5) to be set, once per receive-state transition from 16:12:38.116898 onwards]
00:19:52.295 [2024-07-15 16:12:38.117480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ac290 was disconnected and freed. reset controller.
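The ABORTED - SQ DELETION (00/08) completions above are the host reporting every I/O that was still outstanding on qid:1 when the submission queue was torn down; they are a consequence of the connection going away, not of a media error. A minimal sketch, assuming an SPDK NVMe host application with its own completion callback (the function name and the decision to treat the status as retryable are illustrative, not taken from this test):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Illustrative I/O completion callback: tell "aborted because the SQ was
     * deleted" apart from a genuine I/O error. */
    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            return; /* normal completion */
        }
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The queue went away underneath the request (the (00/08) status
             * printed above); the request can be resubmitted once the
             * controller has been reset. */
            return;
        }
        fprintf(stderr, "I/O failed: %s\n",
                spdk_nvme_cpl_get_status_string(&cpl->status));
    }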
00:19:52.295 [2024-07-15 16:12:38.117704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138c900 is same with the state(5) to be set
[the same *ERROR* line for tqpair=0x138c900 repeats through 16:12:38.117800]
00:19:52.295 [2024-07-15 16:12:38.119149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.295 [2024-07-15 16:12:38.119175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same pair of NOTICE lines repeats for WRITE sqid:1 cid:5 through cid:40 (lba:25216 through lba:29696), each command print followed by an ABORTED - SQ DELETION (00/08) completion; interleaved with them, tcp.c:1607:nvmf_tcp_qpair_set_recv_state repeatedly logs *ERROR*: The recv state of tqpair=0x138cda0 is same with the state(5) to be set between 16:12:38.119255 and 16:12:38.120174]
00:19:52.297 [2024-07-15
16:12:38.120412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 
16:12:38.120708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.120963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.120997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.297 [2024-07-15 16:12:38.121185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.297 [2024-07-15 16:12:38.121199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.121215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.298 [2024-07-15 16:12:38.121229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.121240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121316] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb02880 was disconnected and freed. reset controller. 
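As with the earlier qpair, bdev_nvme reacts to the dead TCP connection by freeing the I/O qpair and scheduling a controller reset ("reset controller." above), which is what produces the resetting-controller notices further down. A minimal sketch of the equivalent recovery step for a raw SPDK host application that polls its own qpair (names are illustrative; whether existing qpairs are reconnected automatically after the reset depends on the SPDK version, so that part is only hinted at in a comment):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Illustrative poll-loop fragment: if the qpair can no longer be polled
     * (for example the TCP socket died, as in the "Bad file descriptor"
     * errors below), fall back to a controller reset. */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                fprintf(stderr, "controller reset failed\n");
            }
            /* Depending on the SPDK version, I/O qpairs may have to be
             * reconnected or re-allocated once the reset succeeds. */
        }
    }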
00:19:52.298 [2024-07-15 16:12:38.121327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set
[the same *ERROR* line for tqpair=0x138d240 repeats on every receive-state transition from 16:12:38.121327 through 16:12:38.121857]
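The wall of identical lines above (and the similar bursts for tqpair=0x138c900 and 0x138cda0 earlier) is the target's TCP transport being asked, while the connection is torn down, to enter the receive state it is already in; the message is noisy but does not by itself indicate data loss. A small self-contained sketch of the guard pattern the message implies; the struct, the enum and the meaning of the numeric value are assumptions for illustration, not SPDK's actual definitions:

    #include <stdio.h>

    /* Illustrative stand-ins for a transport qpair and its receive state;
     * only the printed message mirrors the log, the names are made up. */
    enum recv_state { RECV_STATE_READY = 0, RECV_STATE_CLOSING = 5 };

    struct tqpair {
        enum recv_state recv_state;
    };

    /* Guard pattern suggested by the log: warn and bail out when asked to
     * re-enter the current state, otherwise switch states. */
    static void
    set_recv_state(struct tqpair *tq, enum recv_state state)
    {
        if (tq->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int
    main(void)
    {
        struct tqpair tq = { .recv_state = RECV_STATE_CLOSING };

        set_recv_state(&tq, RECV_STATE_CLOSING); /* prints once, like each line above */
        return 0;
    }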
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.121994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d240 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.298 [2024-07-15 16:12:38.122819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:52.298 [2024-07-15 16:12:38.122872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4c60 (9): Bad file descriptor 00:19:52.298 [2024-07-15 16:12:38.122921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b2830 (9): Bad file descriptor 00:19:52.298 [2024-07-15 16:12:38.122941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.298 [2024-07-15 
16:12:38.122989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.122995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.123002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.298 [2024-07-15 16:12:38.123014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.123027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.298 [2024-07-15 16:12:38.123052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.123065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.298 [2024-07-15 16:12:38.123077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.298 [2024-07-15 16:12:38.123089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.298 [2024-07-15 16:12:38.123090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76bb0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4e390 (9): Bad file descriptor 00:19:52.299 [2024-07-15 16:12:38.123143]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123318] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76990 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4b4610 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c)
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.299 [2024-07-15 16:12:38.123654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.299 [2024-07-15 16:12:38.123667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5280 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7e240 (9): Bad file descriptor 00:19:52.299 [2024-07-15 16:12:38.123707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.299 [2024-07-15 16:12:38.123729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9de450 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.123746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.123758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.123770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d6e0 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.125342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:52.300 [2024-07-15 16:12:38.126333] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.126469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.300 [2024-07-15 16:12:38.126499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b2830 with addr=10.0.0.2, port=4420 00:19:52.300 [2024-07-15 16:12:38.126517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2830 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.126606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.300 [2024-07-15 16:12:38.126632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4c60 with addr=10.0.0.2, port=4420 00:19:52.300 [2024-07-15 16:12:38.126647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4c60 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.126757]
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.300 [2024-07-15 16:12:38.126781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9de450 with addr=10.0.0.2, port=4420 00:19:52.300 [2024-07-15 16:12:38.126796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9de450 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.126878] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.127221] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.127487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b2830 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.127515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4c60 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.127534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9de450 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.127661] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.127731] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.127885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.300 [2024-07-15 16:12:38.127906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.300 [2024-07-15 16:12:38.127924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.300 [2024-07-15 16:12:38.127945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:52.300 [2024-07-15 16:12:38.127979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:52.300 [2024-07-15 16:12:38.128019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:52.300 [2024-07-15 16:12:38.128040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:52.300 [2024-07-15 16:12:38.128055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:52.300 [2024-07-15 16:12:38.128068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:52.300 [2024-07-15 16:12:38.128178] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.128243] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:52.300 [2024-07-15 16:12:38.128276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.300 [2024-07-15 16:12:38.128294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.300 [2024-07-15 16:12:38.128306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.300 [2024-07-15 16:12:38.132850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76bb0 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.132974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.300 [2024-07-15 16:12:38.133023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.300 [2024-07-15 16:12:38.133062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.300 [2024-07-15 16:12:38.133090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.300 [2024-07-15 16:12:38.133119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7c350 is same with the state(5) to be set 00:19:52.300 [2024-07-15 16:12:38.133162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76990 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.133194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4b4610 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.133225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d5280 (9): Bad file descriptor 00:19:52.300 [2024-07-15 16:12:38.133379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.133979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.133994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.134010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.134024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.134040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.134054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.300 [2024-07-15 16:12:38.134070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-07-15 16:12:38.134083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:52.301 [2024-07-15 16:12:38.134160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 
16:12:38.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.134975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.134991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.301 [2024-07-15 16:12:38.135192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.301 [2024-07-15 16:12:38.135208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.135374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.135388] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01410 is same with the state(5) to be set 00:19:52.302 [2024-07-15 16:12:38.136704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.136980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.136995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137026] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.137657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.137671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.149564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.149626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.149643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.149658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.149674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.149689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.149706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.149720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.302 [2024-07-15 16:12:38.149737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.302 [2024-07-15 16:12:38.149751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.149973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.149988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
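The paired *NOTICE* lines above are SPDK's dump of every command still outstanding on an I/O queue pair at the moment its submission queue is deleted during a controller reset: nvme_io_qpair_print_command prints the queued READ/WRITE (sqid, cid, nsid, lba, len) and spdk_nvme_print_completion prints the forced completion, here always ABORTED - SQ DELETION (00/08), i.e. status code type 0h (generic command status) with status code 08h, "Command Aborted due to SQ Deletion". A minimal sketch for summarizing such a run offline, assuming the console output has been saved to a placeholder file named nvmf-shutdown.log (not a file this job actually writes), could be:

  # count the forced completions (grep -o so wrapped lines holding several entries still count correctly)
  grep -o 'ABORTED - SQ DELETION' nvmf-shutdown.log | wc -l
  # tally the aborted READs and their total length in logical blocks (len is reported in blocks, not bytes)
  grep -oE 'READ sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+ len:[0-9]+' nvmf-shutdown.log |
    awk -F'[ :]' '{reads++; blocks+=$11} END {print reads " aborted READs, " blocks " blocks"}'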
00:19:52.303 [2024-07-15 16:12:38.150215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 
16:12:38.150522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.150631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.150646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa403a0 is same with the state(5) to be set 00:19:52.303 [2024-07-15 16:12:38.152351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:52.303 [2024-07-15 16:12:38.152387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:52.303 [2024-07-15 16:12:38.152507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7c350 (9): Bad file descriptor 00:19:52.303 [2024-07-15 16:12:38.152888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.303 [2024-07-15 16:12:38.152923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7e240 with addr=10.0.0.2, port=4420 00:19:52.303 [2024-07-15 16:12:38.152941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7e240 is same with the state(5) to be set 00:19:52.303 [2024-07-15 16:12:38.153055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.303 [2024-07-15 16:12:38.153081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4e390 with addr=10.0.0.2, port=4420 00:19:52.303 [2024-07-15 16:12:38.153097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4e390 is same with the state(5) to be set 00:19:52.303 [2024-07-15 16:12:38.153447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.303 [2024-07-15 16:12:38.153746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.303 [2024-07-15 16:12:38.153760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.153982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.153999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
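Interleaved with the aborted-command dumps, the host also logs the reset sequence itself: nvme_ctrlr_disconnect names the subsystem being reset (nqn.2016-06.io.spdk:cnode2, cnode10, ...), posix_sock_create reports connect() failed, errno = 111 (ECONNREFUSED on Linux) while the target listener at 10.0.0.2:4420 is unavailable, and nvme_tcp_qpair_process_completions fails to flush the old qpair with error 9 (Bad file descriptor) once its socket has been torn down. A small sketch, again against the assumed saved log file from the previous snippet, for seeing which controllers were reset and which endpoints the failed reconnects targeted:

  # controllers that went through a reset, with occurrence counts
  grep -oE 'nqn\.2016-06\.io\.spdk:cnode[0-9]+\] resetting controller' nvmf-shutdown.log | sort | uniq -c
  # TCP qpairs whose reconnect hit a socket error, grouped by the address and port they targeted
  grep -oE 'connection error of tqpair=0x[0-9a-f]+ with addr=[0-9.]+, port=[0-9]+' nvmf-shutdown.log | sort | uniq -c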
00:19:52.304 [2024-07-15 16:12:38.154156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 
16:12:38.154456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.154969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.154988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.155003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.155019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.155034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.155050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.155064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.304 [2024-07-15 16:12:38.155080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.304 [2024-07-15 16:12:38.155094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.155430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.155444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ad720 is same with the state(5) to be set 00:19:52.305 [2024-07-15 16:12:38.156697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.156969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.156985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.305 [2024-07-15 16:12:38.157458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.305 [2024-07-15 16:12:38.157472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.157982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.157996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.158669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.158684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aebb0 is same with the state(5) to be set 00:19:52.306 [2024-07-15 16:12:38.159925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.159950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.159980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.160003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.306 [2024-07-15 16:12:38.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.306 [2024-07-15 16:12:38.160049] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.160978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.160993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:52.307 [2024-07-15 16:12:38.161299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.307 [2024-07-15 16:12:38.161359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-07-15 16:12:38.161376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 
16:12:38.161604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.161915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.161931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dff30 is same with the state(5) to be set 00:19:52.308 [2024-07-15 16:12:38.163168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.308 [2024-07-15 16:12:38.163880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.308 [2024-07-15 16:12:38.163896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.163910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.163926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.163940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.163964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.163981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.163997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:52.309 [2024-07-15 16:12:38.164719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.164951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.164984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.165001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.165016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.165032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 
16:12:38.165046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.165076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.309 [2024-07-15 16:12:38.165093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.309 [2024-07-15 16:12:38.165108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.165124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.165138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.165155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.165168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.165183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1487910 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.166697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:52.310 [2024-07-15 16:12:38.166729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:52.310 [2024-07-15 16:12:38.166749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.310 [2024-07-15 16:12:38.166768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:52.310 [2024-07-15 16:12:38.166784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:52.310 [2024-07-15 16:12:38.166858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7e240 (9): Bad file descriptor 00:19:52.310 [2024-07-15 16:12:38.166883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4e390 (9): Bad file descriptor 00:19:52.310 [2024-07-15 16:12:38.166949] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.310 [2024-07-15 16:12:38.166999] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.310 [2024-07-15 16:12:38.167022] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.310 [2024-07-15 16:12:38.167040] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:52.310 [2024-07-15 16:12:38.167137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:52.310 [2024-07-15 16:12:38.167161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:52.310 [2024-07-15 16:12:38.167393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.310 [2024-07-15 16:12:38.167423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9de450 with addr=10.0.0.2, port=4420 00:19:52.310 [2024-07-15 16:12:38.167441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9de450 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.167536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.310 [2024-07-15 16:12:38.167560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4c60 with addr=10.0.0.2, port=4420 00:19:52.310 [2024-07-15 16:12:38.167576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4c60 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.167656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.310 [2024-07-15 16:12:38.167680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b2830 with addr=10.0.0.2, port=4420 00:19:52.310 [2024-07-15 16:12:38.167695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2830 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.167784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.310 [2024-07-15 16:12:38.167809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d5280 with addr=10.0.0.2, port=4420 00:19:52.310 [2024-07-15 16:12:38.167825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5280 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.167909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.310 [2024-07-15 16:12:38.167933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4b4610 with addr=10.0.0.2, port=4420 00:19:52.310 [2024-07-15 16:12:38.167949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4b4610 is same with the state(5) to be set 00:19:52.310 [2024-07-15 16:12:38.167974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.310 [2024-07-15 16:12:38.167989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:52.310 [2024-07-15 16:12:38.168005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:52.310 [2024-07-15 16:12:38.168031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:52.310 [2024-07-15 16:12:38.168047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:52.310 [2024-07-15 16:12:38.168060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:19:52.310 [2024-07-15 16:12:38.169194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 16:12:38.169488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.310 [2024-07-15 16:12:38.169502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.310 [2024-07-15 
16:12:38.169518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:52.310 [2024-07-15 16:12:38.169532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid 11 through cid 63, lba 17792 through 24448 in 128-block steps ...] 
00:19:52.311 [2024-07-15 16:12:38.171191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3eee0 is same with the state(5) to be set 
00:19:52.311 [2024-07-15 16:12:38.173543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.311 [2024-07-15 16:12:38.173569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.312 task offset: 27392 on job bdev=Nvme1n1 fails 
00:19:52.312 
00:19:52.312 Latency(us) 
00:19:52.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:52.312 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme1n1 ended in about 0.89 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme1n1 : 0.89 215.37 13.46 71.79 0.00 220326.83 5412.79 240784.12 
00:19:52.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme2n1 ended in about 0.91 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme2n1 : 0.91 140.81 8.80 70.41 0.00 293478.65 20874.43 262532.36 
00:19:52.312 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme3n1 ended in about 0.90 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme3n1 : 0.90 213.95 13.37 71.32 0.00 212588.66 18544.26 245444.46 
00:19:52.312 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme4n1 ended in about 0.89 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme4n1 : 0.89 214.55 13.41 71.52 0.00 207363.70 10534.31 250104.79 
00:19:52.312 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme5n1 ended in about 0.93 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme5n1 : 0.93 142.09 8.88 68.89 0.00 276077.46 21942.42 285834.05 
00:19:52.312 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme6n1 ended in about 0.93 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme6n1 : 0.93 205.96 12.87 68.65 0.00 207568.40 17864.63 254765.13 
00:19:52.312 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme7n1 ended in about 0.94 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme7n1 : 0.94 136.83 8.55 68.42 0.00 271939.76 21262.79 257872.02 
00:19:52.312 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme8n1 ended in about 0.94 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme8n1 : 0.94 136.36 8.52 68.18 0.00 267135.30 19903.53 254765.13 
00:19:52.312 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme9n1 ended in about 0.94 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme9n1 : 0.94 135.49 8.47 67.75 0.00 263206.94 21748.24 264085.81 
00:19:52.312 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:52.312 Job: Nvme10n1 ended in about 0.92 seconds with error 
00:19:52.312 Verification LBA range: start 0x0 length 0x400 
00:19:52.312 Nvme10n1 : 0.92 138.49 8.66 69.25 0.00 250526.28 19418.07 248551.35 
00:19:52.312 =================================================================================================================== 
00:19:52.312 Total : 1679.91 104.99 696.16 0.00 242957.46 5412.79 285834.05 
00:19:52.312 [2024-07-15 16:12:38.202613] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:19:52.312 [2024-07-15 16:12:38.202712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:19:52.312 [2024-07-15 16:12:38.203007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.312 [2024-07-15 16:12:38.203048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa76bb0 with addr=10.0.0.2, port=4420 00:19:52.312 [2024-07-15 16:12:38.203068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76bb0 is same with the state(5) to be set 00:19:52.312 [2024-07-15 16:12:38.203180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.312 [2024-07-15 16:12:38.203207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa76990 with addr=10.0.0.2, port=4420 00:19:52.312 [2024-07-15 16:12:38.203224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76990 is same with the state(5) to be set 00:19:52.312 [2024-07-15 16:12:38.203249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9de450 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4c60 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b2830 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d5280 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4b4610 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.312 [2024-07-15 16:12:38.203635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7c350 with addr=10.0.0.2, port=4420 00:19:52.312 [2024-07-15 16:12:38.203652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7c350 is same with the state(5) to be set 00:19:52.312 [2024-07-15 16:12:38.203677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76bb0 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76990 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.203713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.203728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.203744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:52.312 [2024-07-15 16:12:38.203765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.203781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.203794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:19:52.312 [2024-07-15 16:12:38.203811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.203837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.203852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.312 [2024-07-15 16:12:38.203869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.203884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.203897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:52.312 [2024-07-15 16:12:38.203914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.203929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.203942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:52.312 [2024-07-15 16:12:38.203989] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204014] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204033] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204052] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204069] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204087] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204105] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.312 [2024-07-15 16:12:38.204493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.312 [2024-07-15 16:12:38.204578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7c350 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.204597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.204611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.204624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:52.312 [2024-07-15 16:12:38.204642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.204655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.204668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:52.312 [2024-07-15 16:12:38.204729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:52.312 [2024-07-15 16:12:38.204754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:52.312 [2024-07-15 16:12:38.204772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.312 [2024-07-15 16:12:38.204816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:52.312 [2024-07-15 16:12:38.204833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:52.312 [2024-07-15 16:12:38.204846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:52.312 [2024-07-15 16:12:38.204893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.312 [2024-07-15 16:12:38.204986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.312 [2024-07-15 16:12:38.205013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4e390 with addr=10.0.0.2, port=4420 00:19:52.312 [2024-07-15 16:12:38.205030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4e390 is same with the state(5) to be set 00:19:52.312 [2024-07-15 16:12:38.205114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.312 [2024-07-15 16:12:38.205139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7e240 with addr=10.0.0.2, port=4420 00:19:52.312 [2024-07-15 16:12:38.205155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7e240 is same with the state(5) to be set 00:19:52.312 [2024-07-15 16:12:38.205197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4e390 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.205223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7e240 (9): Bad file descriptor 00:19:52.312 [2024-07-15 16:12:38.205263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:52.313 [2024-07-15 16:12:38.205282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:52.313 [2024-07-15 16:12:38.205296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:52.313 [2024-07-15 16:12:38.205313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.313 [2024-07-15 16:12:38.205327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:52.313 [2024-07-15 16:12:38.205341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:52.313 [2024-07-15 16:12:38.205377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.313 [2024-07-15 16:12:38.205394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.881 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:52.881 16:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 828272 00:19:53.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (828272) - No such process 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.817 rmmod nvme_tcp 00:19:53.817 rmmod nvme_fabrics 00:19:53.817 rmmod nvme_keyring 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.817 16:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.349 16:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.349 00:19:56.349 real 0m7.524s 00:19:56.349 user 0m18.270s 00:19:56.349 sys 0m1.470s 00:19:56.349 
16:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.349 16:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.349 ************************************ 00:19:56.350 END TEST nvmf_shutdown_tc3 00:19:56.350 ************************************ 00:19:56.350 16:12:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:56.350 16:12:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:56.350 00:19:56.350 real 0m27.560s 00:19:56.350 user 1m16.725s 00:19:56.350 sys 0m6.473s 00:19:56.350 16:12:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.350 16:12:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:56.350 ************************************ 00:19:56.350 END TEST nvmf_shutdown 00:19:56.350 ************************************ 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.350 16:12:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.350 16:12:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.350 16:12:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:56.350 16:12:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.350 16:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.350 ************************************ 00:19:56.350 START TEST nvmf_multicontroller 00:19:56.350 ************************************ 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.350 * Looking for test storage... 
00:19:56.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:56.350 16:12:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.350 16:12:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.350 16:12:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.248 16:12:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:58.248 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:58.248 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:58.248 Found net devices under 0000:09:00.0: cvl_0_0 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:58.248 Found net devices under 0000:09:00.1: cvl_0_1 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.248 16:12:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.248 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:19:58.249 00:19:58.249 --- 10.0.0.2 ping statistics --- 00:19:58.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.249 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:58.249 00:19:58.249 --- 10.0.0.1 ping statistics --- 00:19:58.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.249 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.249 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=830794 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 830794 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 830794 ']' 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.507 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.507 [2024-07-15 16:12:44.309385] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:58.507 [2024-07-15 16:12:44.309456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.507 [2024-07-15 16:12:44.370139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.507 [2024-07-15 16:12:44.472874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.507 [2024-07-15 16:12:44.472935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.507 [2024-07-15 16:12:44.472962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.507 [2024-07-15 16:12:44.472989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.507 [2024-07-15 16:12:44.472999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.507 [2024-07-15 16:12:44.473095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.507 [2024-07-15 16:12:44.473168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.507 [2024-07-15 16:12:44.473169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 [2024-07-15 16:12:44.603540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 Malloc0 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 [2024-07-15 16:12:44.661756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 
16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 [2024-07-15 16:12:44.669622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 Malloc1 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=830823 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 830823 /var/tmp/bdevperf.sock 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 830823 ']' 00:19:58.766 16:12:44 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.766 16:12:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 NVMe0n1 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.337 1 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 request: 00:19:59.337 { 00:19:59.337 "name": "NVMe0", 00:19:59.337 "trtype": "tcp", 00:19:59.337 "traddr": "10.0.0.2", 00:19:59.337 "adrfam": "ipv4", 00:19:59.337 "trsvcid": "4420", 00:19:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.337 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:59.337 "hostaddr": "10.0.0.2", 00:19:59.337 "hostsvcid": "60000", 00:19:59.337 "prchk_reftag": false, 00:19:59.337 "prchk_guard": false, 00:19:59.337 "hdgst": false, 00:19:59.337 "ddgst": false, 00:19:59.337 "method": "bdev_nvme_attach_controller", 00:19:59.337 "req_id": 1 00:19:59.337 } 00:19:59.337 Got JSON-RPC error response 00:19:59.337 response: 00:19:59.337 { 00:19:59.337 "code": -114, 00:19:59.337 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:59.337 } 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 request: 00:19:59.337 { 00:19:59.337 "name": "NVMe0", 00:19:59.337 "trtype": "tcp", 00:19:59.337 "traddr": "10.0.0.2", 00:19:59.337 "adrfam": "ipv4", 00:19:59.337 "trsvcid": "4420", 00:19:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:59.337 "hostaddr": "10.0.0.2", 00:19:59.337 "hostsvcid": "60000", 00:19:59.337 "prchk_reftag": false, 00:19:59.337 "prchk_guard": false, 
00:19:59.337 "hdgst": false, 00:19:59.337 "ddgst": false, 00:19:59.337 "method": "bdev_nvme_attach_controller", 00:19:59.337 "req_id": 1 00:19:59.337 } 00:19:59.337 Got JSON-RPC error response 00:19:59.337 response: 00:19:59.337 { 00:19:59.337 "code": -114, 00:19:59.337 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:59.337 } 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.337 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 request: 00:19:59.337 { 00:19:59.337 "name": "NVMe0", 00:19:59.337 "trtype": "tcp", 00:19:59.337 "traddr": "10.0.0.2", 00:19:59.337 "adrfam": "ipv4", 00:19:59.337 "trsvcid": "4420", 00:19:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.337 "hostaddr": "10.0.0.2", 00:19:59.337 "hostsvcid": "60000", 00:19:59.337 "prchk_reftag": false, 00:19:59.337 "prchk_guard": false, 00:19:59.337 "hdgst": false, 00:19:59.337 "ddgst": false, 00:19:59.337 "multipath": "disable", 00:19:59.338 "method": "bdev_nvme_attach_controller", 00:19:59.338 "req_id": 1 00:19:59.338 } 00:19:59.338 Got JSON-RPC error response 00:19:59.338 response: 00:19:59.338 { 00:19:59.338 "code": -114, 00:19:59.338 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:59.338 } 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.338 16:12:45 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.338 request: 00:19:59.338 { 00:19:59.338 "name": "NVMe0", 00:19:59.338 "trtype": "tcp", 00:19:59.338 "traddr": "10.0.0.2", 00:19:59.338 "adrfam": "ipv4", 00:19:59.338 "trsvcid": "4420", 00:19:59.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.338 "hostaddr": "10.0.0.2", 00:19:59.338 "hostsvcid": "60000", 00:19:59.338 "prchk_reftag": false, 00:19:59.338 "prchk_guard": false, 00:19:59.338 "hdgst": false, 00:19:59.338 "ddgst": false, 00:19:59.338 "multipath": "failover", 00:19:59.338 "method": "bdev_nvme_attach_controller", 00:19:59.338 "req_id": 1 00:19:59.338 } 00:19:59.338 Got JSON-RPC error response 00:19:59.338 response: 00:19:59.338 { 00:19:59.338 "code": -114, 00:19:59.338 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:59.338 } 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.338 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.338 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.596 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:59.596 16:12:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.972 0 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 830823 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 830823 ']' 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 830823 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830823 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830823' 00:20:00.972 killing process with pid 830823 00:20:00.972 16:12:46 
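The four rejected bdev_nvme_attach_controller calls above, each answered with JSON-RPC error -114, appear to exercise the rule that the name NVMe0 cannot be reused with a different host NQN, a different subsystem, multipath disabled, or the same network path in failover mode; the plain attach of a second path on port 4421 is the one that succeeds. With NVMe1 attached as well, the controller count is re-checked and I/O is kicked off, roughly as follows (the [ ... ] guard is illustrative; the script compares against 2 the same way):

    # Count check and I/O kick-off mirrored from the surrounding records.
    n=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)
    [ "$n" -eq 2 ] || echo "unexpected controller count: $n"
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests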
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 830823 00:20:00.972 16:12:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 830823 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:01.231 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:01.231 [2024-07-15 16:12:44.776571] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:20:01.231 [2024-07-15 16:12:44.776667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830823 ] 00:20:01.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.231 [2024-07-15 16:12:44.841865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.231 [2024-07-15 16:12:44.952252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.231 [2024-07-15 16:12:45.563270] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name ae8b8553-daa6-4c11-9b9d-65873e05bb94 already exists 00:20:01.231 [2024-07-15 16:12:45.563313] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:ae8b8553-daa6-4c11-9b9d-65873e05bb94 alias for bdev NVMe1n1 00:20:01.231 [2024-07-15 16:12:45.563328] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:01.231 Running I/O for 1 seconds... 
00:20:01.231 00:20:01.231 Latency(us) 00:20:01.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.231 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:01.231 NVMe0n1 : 1.01 18950.71 74.03 0.00 0.00 6743.69 1941.81 11990.66 00:20:01.231 =================================================================================================================== 00:20:01.231 Total : 18950.71 74.03 0.00 0.00 6743.69 1941.81 11990.66 00:20:01.231 Received shutdown signal, test time was about 1.000000 seconds 00:20:01.231 00:20:01.231 Latency(us) 00:20:01.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.231 =================================================================================================================== 00:20:01.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.231 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.231 rmmod nvme_tcp 00:20:01.231 rmmod nvme_fabrics 00:20:01.231 rmmod nvme_keyring 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 830794 ']' 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 830794 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 830794 ']' 00:20:01.231 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 830794 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830794 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830794' 00:20:01.232 killing process with pid 830794 00:20:01.232 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 830794 00:20:01.232 16:12:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 830794 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.490 16:12:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.031 16:12:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.031 00:20:04.031 real 0m7.569s 00:20:04.031 user 0m11.834s 00:20:04.031 sys 0m2.337s 00:20:04.031 16:12:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.031 16:12:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.031 ************************************ 00:20:04.031 END TEST nvmf_multicontroller 00:20:04.031 ************************************ 00:20:04.031 16:12:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:04.031 16:12:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.031 16:12:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:04.031 16:12:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.031 16:12:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.031 ************************************ 00:20:04.031 START TEST nvmf_aer 00:20:04.031 ************************************ 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.031 * Looking for test storage... 
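The nvmf_aer test that starts here builds a subsystem capped at two namespaces, points the test/nvme/aer/aer tool at it, and then hot-adds a second namespace so the tool observes a namespace-attribute-changed asynchronous event. A condensed sketch of that flow, with every argument taken from the records further down (namespace wrapping of rpc.py is omitted for brevity):

    # AER flow exercised by the following section (sketch; arguments from the log).
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # Adding the second namespace is what produces the "aer_cb - Changed Namespace" line.
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2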
00:20:04.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.031 16:12:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:05.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:20:05.940 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:05.940 Found net devices under 0000:09:00.0: cvl_0_0 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:05.940 Found net devices under 0000:09:00.1: cvl_0_1 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.940 
16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.940 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:05.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:20:05.941 00:20:05.941 --- 10.0.0.2 ping statistics --- 00:20:05.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.941 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:20:05.941 00:20:05.941 --- 10.0.0.1 ping statistics --- 00:20:05.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.941 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=833151 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 833151 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 833151 ']' 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.941 16:12:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.941 [2024-07-15 16:12:51.841442] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:20:05.941 [2024-07-15 16:12:51.841514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.941 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.941 [2024-07-15 16:12:51.908398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.201 [2024-07-15 16:12:52.020328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.201 [2024-07-15 16:12:52.020394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
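The ping exchange above confirms the netns-based topology this test runs on: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. A sketch of the same setup, with interface and namespace names taken from the nvmf_tcp_init records above (requires root):

    # Test-network setup mirrored from the records above (sketch only).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # initiator -> target reachability check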
00:20:06.201 [2024-07-15 16:12:52.020422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.201 [2024-07-15 16:12:52.020433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.201 [2024-07-15 16:12:52.020443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.201 [2024-07-15 16:12:52.020505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.201 [2024-07-15 16:12:52.020562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.201 [2024-07-15 16:12:52.020629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.201 [2024-07-15 16:12:52.020626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.201 [2024-07-15 16:12:52.181851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.201 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.462 Malloc0 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.462 [2024-07-15 16:12:52.235441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.462 [ 00:20:06.462 { 00:20:06.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:06.462 "subtype": "Discovery", 00:20:06.462 "listen_addresses": [], 00:20:06.462 "allow_any_host": true, 00:20:06.462 "hosts": [] 00:20:06.462 }, 00:20:06.462 { 00:20:06.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.462 "subtype": "NVMe", 00:20:06.462 "listen_addresses": [ 00:20:06.462 { 00:20:06.462 "trtype": "TCP", 00:20:06.462 "adrfam": "IPv4", 00:20:06.462 "traddr": "10.0.0.2", 00:20:06.462 "trsvcid": "4420" 00:20:06.462 } 00:20:06.462 ], 00:20:06.462 "allow_any_host": true, 00:20:06.462 "hosts": [], 00:20:06.462 "serial_number": "SPDK00000000000001", 00:20:06.462 "model_number": "SPDK bdev Controller", 00:20:06.462 "max_namespaces": 2, 00:20:06.462 "min_cntlid": 1, 00:20:06.462 "max_cntlid": 65519, 00:20:06.462 "namespaces": [ 00:20:06.462 { 00:20:06.462 "nsid": 1, 00:20:06.462 "bdev_name": "Malloc0", 00:20:06.462 "name": "Malloc0", 00:20:06.462 "nguid": "2097D71AA64D41818F7BA2EEDA66941D", 00:20:06.462 "uuid": "2097d71a-a64d-4181-8f7b-a2eeda66941d" 00:20:06.462 } 00:20:06.462 ] 00:20:06.462 } 00:20:06.462 ] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=833175 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:06.462 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:06.462 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:06.721 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.721 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.721 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 Malloc1 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 [ 00:20:06.722 { 00:20:06.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:06.722 "subtype": "Discovery", 00:20:06.722 "listen_addresses": [], 00:20:06.722 "allow_any_host": true, 00:20:06.722 "hosts": [] 00:20:06.722 }, 00:20:06.722 { 00:20:06.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.722 "subtype": "NVMe", 00:20:06.722 "listen_addresses": [ 00:20:06.722 { 00:20:06.722 "trtype": "TCP", 00:20:06.722 "adrfam": "IPv4", 00:20:06.722 "traddr": "10.0.0.2", 00:20:06.722 "trsvcid": "4420" 00:20:06.722 } 00:20:06.722 ], 00:20:06.722 "allow_any_host": true, 00:20:06.722 "hosts": [], 00:20:06.722 "serial_number": "SPDK00000000000001", 00:20:06.722 "model_number": "SPDK bdev Controller", 00:20:06.722 "max_namespaces": 2, 00:20:06.722 "min_cntlid": 1, 00:20:06.722 "max_cntlid": 65519, 00:20:06.722 "namespaces": [ 00:20:06.722 { 00:20:06.722 "nsid": 1, 00:20:06.722 "bdev_name": "Malloc0", 00:20:06.722 "name": "Malloc0", 00:20:06.722 "nguid": "2097D71AA64D41818F7BA2EEDA66941D", 00:20:06.722 "uuid": "2097d71a-a64d-4181-8f7b-a2eeda66941d" 00:20:06.722 }, 00:20:06.722 { 00:20:06.722 "nsid": 2, 00:20:06.722 "bdev_name": "Malloc1", 00:20:06.722 "name": "Malloc1", 00:20:06.722 "nguid": "25F4377E1ABA40EFA00B5390B6832BD8", 00:20:06.722 "uuid": "25f4377e-1aba-40ef-a00b-5390b6832bd8" 00:20:06.722 } 00:20:06.722 ] 00:20:06.722 } 00:20:06.722 ] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 833175 00:20:06.722 Asynchronous Event Request test 00:20:06.722 Attaching to 10.0.0.2 00:20:06.722 Attached to 10.0.0.2 00:20:06.722 Registering asynchronous event callbacks... 00:20:06.722 Starting namespace attribute notice tests for all controllers... 
00:20:06.722 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:06.722 aer_cb - Changed Namespace 00:20:06.722 Cleaning up... 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.722 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.722 rmmod nvme_tcp 00:20:06.722 rmmod nvme_fabrics 00:20:06.980 rmmod nvme_keyring 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 833151 ']' 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 833151 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 833151 ']' 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 833151 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 833151 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 833151' 00:20:06.980 killing process with pid 833151 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 833151 00:20:06.980 16:12:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 833151 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.240 16:12:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.144 16:12:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.144 00:20:09.144 real 0m5.525s 00:20:09.144 user 0m4.604s 00:20:09.144 sys 0m1.932s 00:20:09.144 16:12:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.144 16:12:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 ************************************ 00:20:09.144 END TEST nvmf_aer 00:20:09.144 ************************************ 00:20:09.144 16:12:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.144 16:12:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:09.144 16:12:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.144 16:12:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.144 16:12:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 ************************************ 00:20:09.144 START TEST nvmf_async_init 00:20:09.144 ************************************ 00:20:09.144 16:12:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:09.405 * Looking for test storage... 
00:20:09.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6f2647c9446a4bdc9d4d5daa5bddd657 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.405 16:12:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.405 16:12:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:11.362 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:11.362 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:11.362 Found net devices under 0000:09:00.0: cvl_0_0 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:11.362 Found net devices under 0000:09:00.1: cvl_0_1 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.362 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:11.639 00:20:11.639 --- 10.0.0.2 ping statistics --- 00:20:11.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.639 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:20:11.639 00:20:11.639 --- 10.0.0.1 ping statistics --- 00:20:11.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.639 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=835229 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 835229 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 835229 ']' 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.639 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.639 [2024-07-15 16:12:57.454780] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:20:11.639 [2024-07-15 16:12:57.454849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.639 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.639 [2024-07-15 16:12:57.516359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.639 [2024-07-15 16:12:57.619223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.639 [2024-07-15 16:12:57.619299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.639 [2024-07-15 16:12:57.619313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.639 [2024-07-15 16:12:57.619324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.639 [2024-07-15 16:12:57.619333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.639 [2024-07-15 16:12:57.619372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 [2024-07-15 16:12:57.757297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 null0 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 16:12:57 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6f2647c9446a4bdc9d4d5daa5bddd657 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.901 [2024-07-15 16:12:57.797532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.901 16:12:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.158 nvme0n1 00:20:12.158 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.158 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.158 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.158 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.158 [ 00:20:12.158 { 00:20:12.158 "name": "nvme0n1", 00:20:12.158 "aliases": [ 00:20:12.158 "6f2647c9-446a-4bdc-9d4d-5daa5bddd657" 00:20:12.158 ], 00:20:12.158 "product_name": "NVMe disk", 00:20:12.158 "block_size": 512, 00:20:12.158 "num_blocks": 2097152, 00:20:12.158 "uuid": "6f2647c9-446a-4bdc-9d4d-5daa5bddd657", 00:20:12.158 "assigned_rate_limits": { 00:20:12.158 "rw_ios_per_sec": 0, 00:20:12.158 "rw_mbytes_per_sec": 0, 00:20:12.158 "r_mbytes_per_sec": 0, 00:20:12.158 "w_mbytes_per_sec": 0 00:20:12.158 }, 00:20:12.158 "claimed": false, 00:20:12.158 "zoned": false, 00:20:12.158 "supported_io_types": { 00:20:12.158 "read": true, 00:20:12.158 "write": true, 00:20:12.158 "unmap": false, 00:20:12.158 "flush": true, 00:20:12.158 "reset": true, 00:20:12.158 "nvme_admin": true, 00:20:12.158 "nvme_io": true, 00:20:12.158 "nvme_io_md": false, 00:20:12.158 "write_zeroes": true, 00:20:12.158 "zcopy": false, 00:20:12.158 "get_zone_info": false, 00:20:12.158 "zone_management": false, 00:20:12.158 "zone_append": false, 00:20:12.158 "compare": true, 00:20:12.158 "compare_and_write": true, 00:20:12.158 "abort": true, 00:20:12.158 "seek_hole": false, 00:20:12.158 "seek_data": false, 00:20:12.158 "copy": true, 00:20:12.158 "nvme_iov_md": false 00:20:12.158 }, 00:20:12.158 "memory_domains": [ 00:20:12.158 { 00:20:12.158 "dma_device_id": "system", 00:20:12.158 "dma_device_type": 1 00:20:12.158 } 00:20:12.158 ], 00:20:12.158 "driver_specific": { 00:20:12.158 "nvme": [ 00:20:12.158 { 00:20:12.158 "trid": { 00:20:12.158 "trtype": "TCP", 00:20:12.158 "adrfam": "IPv4", 00:20:12.158 "traddr": "10.0.0.2", 
00:20:12.158 "trsvcid": "4420", 00:20:12.158 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.158 }, 00:20:12.158 "ctrlr_data": { 00:20:12.158 "cntlid": 1, 00:20:12.158 "vendor_id": "0x8086", 00:20:12.158 "model_number": "SPDK bdev Controller", 00:20:12.158 "serial_number": "00000000000000000000", 00:20:12.158 "firmware_revision": "24.09", 00:20:12.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.158 "oacs": { 00:20:12.158 "security": 0, 00:20:12.158 "format": 0, 00:20:12.158 "firmware": 0, 00:20:12.158 "ns_manage": 0 00:20:12.158 }, 00:20:12.158 "multi_ctrlr": true, 00:20:12.158 "ana_reporting": false 00:20:12.158 }, 00:20:12.158 "vs": { 00:20:12.158 "nvme_version": "1.3" 00:20:12.158 }, 00:20:12.158 "ns_data": { 00:20:12.158 "id": 1, 00:20:12.158 "can_share": true 00:20:12.158 } 00:20:12.158 } 00:20:12.158 ], 00:20:12.158 "mp_policy": "active_passive" 00:20:12.158 } 00:20:12.159 } 00:20:12.159 ] 00:20:12.159 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.159 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:12.159 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.159 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.159 [2024-07-15 16:12:58.046178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:12.159 [2024-07-15 16:12:58.046269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97e090 (9): Bad file descriptor 00:20:12.416 [2024-07-15 16:12:58.178082] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 [ 00:20:12.416 { 00:20:12.416 "name": "nvme0n1", 00:20:12.416 "aliases": [ 00:20:12.416 "6f2647c9-446a-4bdc-9d4d-5daa5bddd657" 00:20:12.416 ], 00:20:12.416 "product_name": "NVMe disk", 00:20:12.416 "block_size": 512, 00:20:12.416 "num_blocks": 2097152, 00:20:12.416 "uuid": "6f2647c9-446a-4bdc-9d4d-5daa5bddd657", 00:20:12.416 "assigned_rate_limits": { 00:20:12.416 "rw_ios_per_sec": 0, 00:20:12.416 "rw_mbytes_per_sec": 0, 00:20:12.416 "r_mbytes_per_sec": 0, 00:20:12.416 "w_mbytes_per_sec": 0 00:20:12.416 }, 00:20:12.416 "claimed": false, 00:20:12.416 "zoned": false, 00:20:12.416 "supported_io_types": { 00:20:12.416 "read": true, 00:20:12.416 "write": true, 00:20:12.416 "unmap": false, 00:20:12.416 "flush": true, 00:20:12.416 "reset": true, 00:20:12.416 "nvme_admin": true, 00:20:12.416 "nvme_io": true, 00:20:12.416 "nvme_io_md": false, 00:20:12.416 "write_zeroes": true, 00:20:12.416 "zcopy": false, 00:20:12.416 "get_zone_info": false, 00:20:12.416 "zone_management": false, 00:20:12.416 "zone_append": false, 00:20:12.416 "compare": true, 00:20:12.416 "compare_and_write": true, 00:20:12.416 "abort": true, 00:20:12.416 "seek_hole": false, 00:20:12.416 "seek_data": false, 00:20:12.416 "copy": true, 00:20:12.416 "nvme_iov_md": false 00:20:12.416 }, 00:20:12.416 "memory_domains": [ 00:20:12.416 { 00:20:12.416 "dma_device_id": "system", 00:20:12.416 "dma_device_type": 1 
00:20:12.416 } 00:20:12.416 ], 00:20:12.416 "driver_specific": { 00:20:12.416 "nvme": [ 00:20:12.416 { 00:20:12.416 "trid": { 00:20:12.416 "trtype": "TCP", 00:20:12.416 "adrfam": "IPv4", 00:20:12.416 "traddr": "10.0.0.2", 00:20:12.416 "trsvcid": "4420", 00:20:12.416 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.416 }, 00:20:12.416 "ctrlr_data": { 00:20:12.416 "cntlid": 2, 00:20:12.416 "vendor_id": "0x8086", 00:20:12.416 "model_number": "SPDK bdev Controller", 00:20:12.416 "serial_number": "00000000000000000000", 00:20:12.416 "firmware_revision": "24.09", 00:20:12.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.416 "oacs": { 00:20:12.416 "security": 0, 00:20:12.416 "format": 0, 00:20:12.416 "firmware": 0, 00:20:12.416 "ns_manage": 0 00:20:12.416 }, 00:20:12.416 "multi_ctrlr": true, 00:20:12.416 "ana_reporting": false 00:20:12.416 }, 00:20:12.416 "vs": { 00:20:12.416 "nvme_version": "1.3" 00:20:12.416 }, 00:20:12.416 "ns_data": { 00:20:12.416 "id": 1, 00:20:12.416 "can_share": true 00:20:12.416 } 00:20:12.416 } 00:20:12.416 ], 00:20:12.416 "mp_policy": "active_passive" 00:20:12.416 } 00:20:12.416 } 00:20:12.416 ] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rMnmKc2fsE 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rMnmKc2fsE 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 [2024-07-15 16:12:58.222820] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.416 [2024-07-15 16:12:58.222933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rMnmKc2fsE 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 [2024-07-15 16:12:58.230845] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rMnmKc2fsE 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 [2024-07-15 16:12:58.238870] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.416 [2024-07-15 16:12:58.238929] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:12.416 nvme0n1 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.416 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.416 [ 00:20:12.416 { 00:20:12.416 "name": "nvme0n1", 00:20:12.416 "aliases": [ 00:20:12.416 "6f2647c9-446a-4bdc-9d4d-5daa5bddd657" 00:20:12.416 ], 00:20:12.416 "product_name": "NVMe disk", 00:20:12.416 "block_size": 512, 00:20:12.416 "num_blocks": 2097152, 00:20:12.416 "uuid": "6f2647c9-446a-4bdc-9d4d-5daa5bddd657", 00:20:12.416 "assigned_rate_limits": { 00:20:12.416 "rw_ios_per_sec": 0, 00:20:12.416 "rw_mbytes_per_sec": 0, 00:20:12.416 "r_mbytes_per_sec": 0, 00:20:12.416 "w_mbytes_per_sec": 0 00:20:12.416 }, 00:20:12.416 "claimed": false, 00:20:12.416 "zoned": false, 00:20:12.416 "supported_io_types": { 00:20:12.416 "read": true, 00:20:12.416 "write": true, 00:20:12.416 "unmap": false, 00:20:12.417 "flush": true, 00:20:12.417 "reset": true, 00:20:12.417 "nvme_admin": true, 00:20:12.417 "nvme_io": true, 00:20:12.417 "nvme_io_md": false, 00:20:12.417 "write_zeroes": true, 00:20:12.417 "zcopy": false, 00:20:12.417 "get_zone_info": false, 00:20:12.417 "zone_management": false, 00:20:12.417 "zone_append": false, 00:20:12.417 "compare": true, 00:20:12.417 "compare_and_write": true, 00:20:12.417 "abort": true, 00:20:12.417 "seek_hole": false, 00:20:12.417 "seek_data": false, 00:20:12.417 "copy": true, 00:20:12.417 "nvme_iov_md": false 00:20:12.417 }, 00:20:12.417 "memory_domains": [ 00:20:12.417 { 00:20:12.417 "dma_device_id": "system", 00:20:12.417 "dma_device_type": 1 00:20:12.417 } 00:20:12.417 ], 00:20:12.417 "driver_specific": { 00:20:12.417 "nvme": [ 00:20:12.417 { 00:20:12.417 "trid": { 00:20:12.417 "trtype": "TCP", 00:20:12.417 "adrfam": "IPv4", 00:20:12.417 "traddr": "10.0.0.2", 00:20:12.417 "trsvcid": "4421", 00:20:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.417 }, 00:20:12.417 "ctrlr_data": { 00:20:12.417 "cntlid": 3, 00:20:12.417 "vendor_id": "0x8086", 00:20:12.417 "model_number": "SPDK bdev Controller", 00:20:12.417 "serial_number": "00000000000000000000", 00:20:12.417 "firmware_revision": "24.09", 00:20:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:12.417 "oacs": { 00:20:12.417 "security": 0, 00:20:12.417 "format": 0, 00:20:12.417 "firmware": 0, 00:20:12.417 "ns_manage": 0 00:20:12.417 }, 00:20:12.417 "multi_ctrlr": true, 00:20:12.417 "ana_reporting": false 00:20:12.417 }, 00:20:12.417 "vs": { 00:20:12.417 "nvme_version": "1.3" 00:20:12.417 }, 00:20:12.417 "ns_data": { 00:20:12.417 "id": 1, 00:20:12.417 "can_share": true 00:20:12.417 } 00:20:12.417 } 00:20:12.417 ], 00:20:12.417 "mp_policy": "active_passive" 00:20:12.417 } 00:20:12.417 } 00:20:12.417 ] 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rMnmKc2fsE 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.417 rmmod nvme_tcp 00:20:12.417 rmmod nvme_fabrics 00:20:12.417 rmmod nvme_keyring 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 835229 ']' 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 835229 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 835229 ']' 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 835229 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835229 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835229' 00:20:12.417 killing process with pid 835229 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 835229 00:20:12.417 [2024-07-15 16:12:58.415123] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:12.417 [2024-07-15 16:12:58.415158] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:12.417 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 835229 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.676 16:12:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.214 16:13:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.214 00:20:15.214 real 0m5.571s 00:20:15.214 user 0m2.100s 00:20:15.214 sys 0m1.845s 00:20:15.214 16:13:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.214 16:13:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:15.214 ************************************ 00:20:15.214 END TEST nvmf_async_init 00:20:15.214 ************************************ 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:15.214 16:13:00 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.214 ************************************ 00:20:15.214 START TEST dma 00:20:15.214 ************************************ 00:20:15.214 16:13:00 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.214 * Looking for test storage... 
00:20:15.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.214 16:13:00 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.214 16:13:00 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.214 16:13:00 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.214 16:13:00 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.214 16:13:00 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:15.214 16:13:00 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.214 16:13:00 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.214 16:13:00 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:15.214 16:13:00 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:15.214 00:20:15.214 real 0m0.073s 00:20:15.214 user 0m0.033s 00:20:15.214 sys 0m0.045s 00:20:15.214 16:13:00 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.214 16:13:00 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:15.214 ************************************ 00:20:15.214 END TEST dma 00:20:15.214 ************************************ 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:15.214 16:13:00 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.214 16:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.214 ************************************ 00:20:15.214 START TEST nvmf_identify 00:20:15.214 ************************************ 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.214 * Looking for test storage... 
00:20:15.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.214 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.215 16:13:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.116 16:13:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:17.116 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:17.116 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:17.116 Found net devices under 0000:09:00.0: cvl_0_0 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.116 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:17.116 Found net devices under 0000:09:00.1: cvl_0_1 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.117 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:20:17.377 00:20:17.377 --- 10.0.0.2 ping statistics --- 00:20:17.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.377 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:20:17.377 00:20:17.377 --- 10.0.0.1 ping statistics --- 00:20:17.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.377 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=837359 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 837359 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 837359 ']' 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.377 16:13:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 [2024-07-15 16:13:03.206721] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:20:17.377 [2024-07-15 16:13:03.206799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.377 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.377 [2024-07-15 16:13:03.273831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.637 [2024-07-15 16:13:03.385061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
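The network plumbing traced above (nvmf_tcp_init in nvmf/common.sh) boils down to a short iproute2/iptables sequence. A minimal sketch follows, assuming the two e810 ports enumerate as cvl_0_0 and cvl_0_1 and reusing the 10.0.0.0/24 addressing shown in the log; interface names will differ on other hosts.

# Target port is isolated in its own network namespace; the initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check

With both pings answering, the nvmf_tgt above is launched through ip netns exec cvl_0_0_ns_spdk so that its TCP listener binds inside the namespace.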
00:20:17.637 [2024-07-15 16:13:03.385111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.637 [2024-07-15 16:13:03.385126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.637 [2024-07-15 16:13:03.385137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.637 [2024-07-15 16:13:03.385148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.637 [2024-07-15 16:13:03.385208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.637 [2024-07-15 16:13:03.385264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.637 [2024-07-15 16:13:03.385319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.637 [2024-07-15 16:13:03.385321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.204 [2024-07-15 16:13:04.190057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.204 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 Malloc0 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 [2024-07-15 16:13:04.267675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.465 [ 00:20:18.465 { 00:20:18.465 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.465 "subtype": "Discovery", 00:20:18.465 "listen_addresses": [ 00:20:18.465 { 00:20:18.465 "trtype": "TCP", 00:20:18.465 "adrfam": "IPv4", 00:20:18.465 "traddr": "10.0.0.2", 00:20:18.465 "trsvcid": "4420" 00:20:18.465 } 00:20:18.465 ], 00:20:18.465 "allow_any_host": true, 00:20:18.465 "hosts": [] 00:20:18.465 }, 00:20:18.465 { 00:20:18.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.465 "subtype": "NVMe", 00:20:18.465 "listen_addresses": [ 00:20:18.465 { 00:20:18.465 "trtype": "TCP", 00:20:18.465 "adrfam": "IPv4", 00:20:18.465 "traddr": "10.0.0.2", 00:20:18.465 "trsvcid": "4420" 00:20:18.465 } 00:20:18.465 ], 00:20:18.465 "allow_any_host": true, 00:20:18.465 "hosts": [], 00:20:18.465 "serial_number": "SPDK00000000000001", 00:20:18.465 "model_number": "SPDK bdev Controller", 00:20:18.465 "max_namespaces": 32, 00:20:18.465 "min_cntlid": 1, 00:20:18.465 "max_cntlid": 65519, 00:20:18.465 "namespaces": [ 00:20:18.465 { 00:20:18.465 "nsid": 1, 00:20:18.465 "bdev_name": "Malloc0", 00:20:18.465 "name": "Malloc0", 00:20:18.465 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:18.465 "eui64": "ABCDEF0123456789", 00:20:18.465 "uuid": "5f1cc787-a0b4-4c4c-b5f7-9bf44bd2f6b5" 00:20:18.465 } 00:20:18.465 ] 00:20:18.465 } 00:20:18.465 ] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.465 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:18.465 [2024-07-15 16:13:04.308757] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
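Before the identify pass, host/identify.sh configured the target through rpc_cmd, which in the autotest helpers is, to my understanding, a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket mentioned in the waitforlisten message above. A rough standalone equivalent of that sequence, plus the identify invocation that follows, is sketched below under those assumptions (run from the SPDK checkout; this is not the script itself).

# Same RPC sequence as host/identify.sh, issued by hand against the running nvmf_tgt.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems
# Query the discovery controller over the listener that was just added:
./build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

The DEBUG trace that follows is that identify run: a fabrics CONNECT on the admin queue, VS/CAP property reads, CC.EN toggled to bring the discovery controller ready, then IDENTIFY and GET LOG PAGE commands whose decoded report appears further down.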
00:20:18.465 [2024-07-15 16:13:04.308802] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837510 ] 00:20:18.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.465 [2024-07-15 16:13:04.345227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:18.465 [2024-07-15 16:13:04.345315] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:18.465 [2024-07-15 16:13:04.345332] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:18.465 [2024-07-15 16:13:04.345348] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:18.465 [2024-07-15 16:13:04.345358] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:18.465 [2024-07-15 16:13:04.345654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:18.465 [2024-07-15 16:13:04.345722] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1224540 0 00:20:18.465 [2024-07-15 16:13:04.351972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:18.465 [2024-07-15 16:13:04.351991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:18.465 [2024-07-15 16:13:04.351999] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:18.465 [2024-07-15 16:13:04.352005] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:18.465 [2024-07-15 16:13:04.352080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.352094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.352101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.465 [2024-07-15 16:13:04.352130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:18.465 [2024-07-15 16:13:04.352156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.465 [2024-07-15 16:13:04.359974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.465 [2024-07-15 16:13:04.359992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.465 [2024-07-15 16:13:04.359999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.465 [2024-07-15 16:13:04.360027] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:18.465 [2024-07-15 16:13:04.360039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:18.465 [2024-07-15 16:13:04.360049] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:18.465 [2024-07-15 16:13:04.360070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360079] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.465 [2024-07-15 16:13:04.360097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.465 [2024-07-15 16:13:04.360120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.465 [2024-07-15 16:13:04.360256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.465 [2024-07-15 16:13:04.360270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.465 [2024-07-15 16:13:04.360277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.465 [2024-07-15 16:13:04.360293] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:18.465 [2024-07-15 16:13:04.360306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:18.465 [2024-07-15 16:13:04.360319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.465 [2024-07-15 16:13:04.360349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.465 [2024-07-15 16:13:04.360371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.465 [2024-07-15 16:13:04.360461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.465 [2024-07-15 16:13:04.360475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.465 [2024-07-15 16:13:04.360482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.465 [2024-07-15 16:13:04.360498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:18.465 [2024-07-15 16:13:04.360512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:18.465 [2024-07-15 16:13:04.360524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.465 [2024-07-15 16:13:04.360549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.465 [2024-07-15 16:13:04.360570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.465 [2024-07-15 16:13:04.360704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.465 
[2024-07-15 16:13:04.360716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.465 [2024-07-15 16:13:04.360723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.360730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.465 [2024-07-15 16:13:04.360739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:18.465 [2024-07-15 16:13:04.360760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.465 [2024-07-15 16:13:04.363962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.363975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.363987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.364009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.364157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.364172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.364179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.364194] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:18.466 [2024-07-15 16:13:04.364203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:18.466 [2024-07-15 16:13:04.364216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:18.466 [2024-07-15 16:13:04.364327] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:18.466 [2024-07-15 16:13:04.364335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:18.466 [2024-07-15 16:13:04.364353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.364379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.364415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.364546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.364560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.364567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.364582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:18.466 [2024-07-15 16:13:04.364599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.364626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.364646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.364736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.364750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.364756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.364771] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:18.466 [2024-07-15 16:13:04.364779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.364793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:18.466 [2024-07-15 16:13:04.364808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.364823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.364832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.364843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.364863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.365055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.466 [2024-07-15 16:13:04.365071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.466 [2024-07-15 16:13:04.365078] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365084] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224540): datao=0, datal=4096, cccid=0 00:20:18.466 [2024-07-15 16:13:04.365093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12843c0) on tqpair(0x1224540): expected_datao=0, payload_size=4096 00:20:18.466 [2024-07-15 16:13:04.365101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365116] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365125] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.365163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.365169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.365188] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:18.466 [2024-07-15 16:13:04.365202] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:18.466 [2024-07-15 16:13:04.365211] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:18.466 [2024-07-15 16:13:04.365219] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:18.466 [2024-07-15 16:13:04.365228] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:18.466 [2024-07-15 16:13:04.365236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.365251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.365264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.466 [2024-07-15 16:13:04.365312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.365446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.365459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.365466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.365484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.466 [2024-07-15 16:13:04.365518] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.466 [2024-07-15 16:13:04.365549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.466 [2024-07-15 16:13:04.365585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.466 [2024-07-15 16:13:04.365617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.365636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:18.466 [2024-07-15 16:13:04.365650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.365691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12843c0, cid 0, qid 0 00:20:18.466 [2024-07-15 16:13:04.365702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284540, cid 1, qid 0 00:20:18.466 [2024-07-15 16:13:04.365710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12846c0, cid 2, qid 0 00:20:18.466 [2024-07-15 16:13:04.365718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.466 [2024-07-15 16:13:04.365725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12849c0, cid 4, qid 0 00:20:18.466 [2024-07-15 16:13:04.365856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.466 [2024-07-15 16:13:04.365870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.466 [2024-07-15 16:13:04.365877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12849c0) on tqpair=0x1224540 00:20:18.466 [2024-07-15 16:13:04.365892] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:18.466 [2024-07-15 16:13:04.365901] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:18.466 [2024-07-15 16:13:04.365919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.466 [2024-07-15 16:13:04.365929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224540) 00:20:18.466 [2024-07-15 16:13:04.365940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.466 [2024-07-15 16:13:04.365968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12849c0, cid 4, qid 0 00:20:18.466 [2024-07-15 16:13:04.366079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.467 [2024-07-15 16:13:04.366094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.467 [2024-07-15 16:13:04.366101] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.366107] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224540): datao=0, datal=4096, cccid=4 00:20:18.467 [2024-07-15 16:13:04.366115] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12849c0) on tqpair(0x1224540): expected_datao=0, payload_size=4096 00:20:18.467 [2024-07-15 16:13:04.366123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.366139] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.366148] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.467 [2024-07-15 16:13:04.407123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.467 [2024-07-15 16:13:04.407135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12849c0) on tqpair=0x1224540 00:20:18.467 [2024-07-15 16:13:04.407162] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:18.467 [2024-07-15 16:13:04.407201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224540) 00:20:18.467 [2024-07-15 16:13:04.407225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.467 [2024-07-15 16:13:04.407237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1224540) 00:20:18.467 [2024-07-15 16:13:04.407261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.467 [2024-07-15 16:13:04.407289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x12849c0, cid 4, qid 0 00:20:18.467 [2024-07-15 16:13:04.407302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284b40, cid 5, qid 0 00:20:18.467 [2024-07-15 16:13:04.407685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.467 [2024-07-15 16:13:04.407699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.467 [2024-07-15 16:13:04.407706] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407713] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224540): datao=0, datal=1024, cccid=4 00:20:18.467 [2024-07-15 16:13:04.407721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12849c0) on tqpair(0x1224540): expected_datao=0, payload_size=1024 00:20:18.467 [2024-07-15 16:13:04.407729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407739] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407746] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.467 [2024-07-15 16:13:04.407765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.467 [2024-07-15 16:13:04.407771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.407778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284b40) on tqpair=0x1224540 00:20:18.467 [2024-07-15 16:13:04.448038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.467 [2024-07-15 16:13:04.448057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.467 [2024-07-15 16:13:04.448065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12849c0) on tqpair=0x1224540 00:20:18.467 [2024-07-15 16:13:04.448090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224540) 00:20:18.467 [2024-07-15 16:13:04.448111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.467 [2024-07-15 16:13:04.448141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12849c0, cid 4, qid 0 00:20:18.467 [2024-07-15 16:13:04.448301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.467 [2024-07-15 16:13:04.448316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.467 [2024-07-15 16:13:04.448323] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448329] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224540): datao=0, datal=3072, cccid=4 00:20:18.467 [2024-07-15 16:13:04.448342] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12849c0) on tqpair(0x1224540): expected_datao=0, payload_size=3072 00:20:18.467 [2024-07-15 16:13:04.448350] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448367] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.467 [2024-07-15 16:13:04.448404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.467 [2024-07-15 16:13:04.448410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12849c0) on tqpair=0x1224540 00:20:18.467 [2024-07-15 16:13:04.448433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224540) 00:20:18.467 [2024-07-15 16:13:04.448452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.467 [2024-07-15 16:13:04.448480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12849c0, cid 4, qid 0 00:20:18.467 [2024-07-15 16:13:04.448632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.467 [2024-07-15 16:13:04.448647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.467 [2024-07-15 16:13:04.448654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224540): datao=0, datal=8, cccid=4 00:20:18.467 [2024-07-15 16:13:04.448668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12849c0) on tqpair(0x1224540): expected_datao=0, payload_size=8 00:20:18.467 [2024-07-15 16:13:04.448675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448685] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.467 [2024-07-15 16:13:04.448692] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.730 [2024-07-15 16:13:04.489979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.730 [2024-07-15 16:13:04.490001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.730 [2024-07-15 16:13:04.490013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.730 [2024-07-15 16:13:04.490020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12849c0) on tqpair=0x1224540 00:20:18.730 ===================================================== 00:20:18.730 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:18.730 ===================================================== 00:20:18.730 Controller Capabilities/Features 00:20:18.730 ================================ 00:20:18.730 Vendor ID: 0000 00:20:18.730 Subsystem Vendor ID: 0000 00:20:18.730 Serial Number: .................... 00:20:18.730 Model Number: ........................................ 
00:20:18.730 Firmware Version: 24.09 00:20:18.730 Recommended Arb Burst: 0 00:20:18.730 IEEE OUI Identifier: 00 00 00 00:20:18.730 Multi-path I/O 00:20:18.730 May have multiple subsystem ports: No 00:20:18.730 May have multiple controllers: No 00:20:18.730 Associated with SR-IOV VF: No 00:20:18.730 Max Data Transfer Size: 131072 00:20:18.730 Max Number of Namespaces: 0 00:20:18.730 Max Number of I/O Queues: 1024 00:20:18.730 NVMe Specification Version (VS): 1.3 00:20:18.730 NVMe Specification Version (Identify): 1.3 00:20:18.730 Maximum Queue Entries: 128 00:20:18.730 Contiguous Queues Required: Yes 00:20:18.730 Arbitration Mechanisms Supported 00:20:18.730 Weighted Round Robin: Not Supported 00:20:18.730 Vendor Specific: Not Supported 00:20:18.730 Reset Timeout: 15000 ms 00:20:18.730 Doorbell Stride: 4 bytes 00:20:18.730 NVM Subsystem Reset: Not Supported 00:20:18.730 Command Sets Supported 00:20:18.730 NVM Command Set: Supported 00:20:18.730 Boot Partition: Not Supported 00:20:18.730 Memory Page Size Minimum: 4096 bytes 00:20:18.731 Memory Page Size Maximum: 4096 bytes 00:20:18.731 Persistent Memory Region: Not Supported 00:20:18.731 Optional Asynchronous Events Supported 00:20:18.731 Namespace Attribute Notices: Not Supported 00:20:18.731 Firmware Activation Notices: Not Supported 00:20:18.731 ANA Change Notices: Not Supported 00:20:18.731 PLE Aggregate Log Change Notices: Not Supported 00:20:18.731 LBA Status Info Alert Notices: Not Supported 00:20:18.731 EGE Aggregate Log Change Notices: Not Supported 00:20:18.731 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.731 Zone Descriptor Change Notices: Not Supported 00:20:18.731 Discovery Log Change Notices: Supported 00:20:18.731 Controller Attributes 00:20:18.731 128-bit Host Identifier: Not Supported 00:20:18.731 Non-Operational Permissive Mode: Not Supported 00:20:18.731 NVM Sets: Not Supported 00:20:18.731 Read Recovery Levels: Not Supported 00:20:18.731 Endurance Groups: Not Supported 00:20:18.731 Predictable Latency Mode: Not Supported 00:20:18.731 Traffic Based Keep ALive: Not Supported 00:20:18.731 Namespace Granularity: Not Supported 00:20:18.731 SQ Associations: Not Supported 00:20:18.731 UUID List: Not Supported 00:20:18.731 Multi-Domain Subsystem: Not Supported 00:20:18.731 Fixed Capacity Management: Not Supported 00:20:18.731 Variable Capacity Management: Not Supported 00:20:18.731 Delete Endurance Group: Not Supported 00:20:18.731 Delete NVM Set: Not Supported 00:20:18.731 Extended LBA Formats Supported: Not Supported 00:20:18.731 Flexible Data Placement Supported: Not Supported 00:20:18.731 00:20:18.731 Controller Memory Buffer Support 00:20:18.731 ================================ 00:20:18.731 Supported: No 00:20:18.731 00:20:18.731 Persistent Memory Region Support 00:20:18.731 ================================ 00:20:18.731 Supported: No 00:20:18.731 00:20:18.731 Admin Command Set Attributes 00:20:18.731 ============================ 00:20:18.731 Security Send/Receive: Not Supported 00:20:18.731 Format NVM: Not Supported 00:20:18.731 Firmware Activate/Download: Not Supported 00:20:18.731 Namespace Management: Not Supported 00:20:18.731 Device Self-Test: Not Supported 00:20:18.731 Directives: Not Supported 00:20:18.731 NVMe-MI: Not Supported 00:20:18.731 Virtualization Management: Not Supported 00:20:18.731 Doorbell Buffer Config: Not Supported 00:20:18.731 Get LBA Status Capability: Not Supported 00:20:18.731 Command & Feature Lockdown Capability: Not Supported 00:20:18.731 Abort Command Limit: 1 00:20:18.731 Async 
Event Request Limit: 4 00:20:18.731 Number of Firmware Slots: N/A 00:20:18.731 Firmware Slot 1 Read-Only: N/A 00:20:18.731 Firmware Activation Without Reset: N/A 00:20:18.731 Multiple Update Detection Support: N/A 00:20:18.731 Firmware Update Granularity: No Information Provided 00:20:18.731 Per-Namespace SMART Log: No 00:20:18.731 Asymmetric Namespace Access Log Page: Not Supported 00:20:18.731 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:18.731 Command Effects Log Page: Not Supported 00:20:18.731 Get Log Page Extended Data: Supported 00:20:18.731 Telemetry Log Pages: Not Supported 00:20:18.731 Persistent Event Log Pages: Not Supported 00:20:18.731 Supported Log Pages Log Page: May Support 00:20:18.731 Commands Supported & Effects Log Page: Not Supported 00:20:18.731 Feature Identifiers & Effects Log Page:May Support 00:20:18.731 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.731 Data Area 4 for Telemetry Log: Not Supported 00:20:18.731 Error Log Page Entries Supported: 128 00:20:18.731 Keep Alive: Not Supported 00:20:18.731 00:20:18.731 NVM Command Set Attributes 00:20:18.731 ========================== 00:20:18.731 Submission Queue Entry Size 00:20:18.731 Max: 1 00:20:18.731 Min: 1 00:20:18.731 Completion Queue Entry Size 00:20:18.731 Max: 1 00:20:18.731 Min: 1 00:20:18.731 Number of Namespaces: 0 00:20:18.731 Compare Command: Not Supported 00:20:18.731 Write Uncorrectable Command: Not Supported 00:20:18.731 Dataset Management Command: Not Supported 00:20:18.731 Write Zeroes Command: Not Supported 00:20:18.731 Set Features Save Field: Not Supported 00:20:18.731 Reservations: Not Supported 00:20:18.731 Timestamp: Not Supported 00:20:18.731 Copy: Not Supported 00:20:18.731 Volatile Write Cache: Not Present 00:20:18.731 Atomic Write Unit (Normal): 1 00:20:18.731 Atomic Write Unit (PFail): 1 00:20:18.731 Atomic Compare & Write Unit: 1 00:20:18.731 Fused Compare & Write: Supported 00:20:18.731 Scatter-Gather List 00:20:18.731 SGL Command Set: Supported 00:20:18.731 SGL Keyed: Supported 00:20:18.731 SGL Bit Bucket Descriptor: Not Supported 00:20:18.731 SGL Metadata Pointer: Not Supported 00:20:18.731 Oversized SGL: Not Supported 00:20:18.731 SGL Metadata Address: Not Supported 00:20:18.731 SGL Offset: Supported 00:20:18.731 Transport SGL Data Block: Not Supported 00:20:18.731 Replay Protected Memory Block: Not Supported 00:20:18.731 00:20:18.731 Firmware Slot Information 00:20:18.731 ========================= 00:20:18.731 Active slot: 0 00:20:18.731 00:20:18.731 00:20:18.731 Error Log 00:20:18.731 ========= 00:20:18.731 00:20:18.731 Active Namespaces 00:20:18.731 ================= 00:20:18.731 Discovery Log Page 00:20:18.731 ================== 00:20:18.731 Generation Counter: 2 00:20:18.731 Number of Records: 2 00:20:18.731 Record Format: 0 00:20:18.731 00:20:18.731 Discovery Log Entry 0 00:20:18.731 ---------------------- 00:20:18.731 Transport Type: 3 (TCP) 00:20:18.731 Address Family: 1 (IPv4) 00:20:18.731 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:18.731 Entry Flags: 00:20:18.731 Duplicate Returned Information: 1 00:20:18.731 Explicit Persistent Connection Support for Discovery: 1 00:20:18.731 Transport Requirements: 00:20:18.731 Secure Channel: Not Required 00:20:18.731 Port ID: 0 (0x0000) 00:20:18.731 Controller ID: 65535 (0xffff) 00:20:18.731 Admin Max SQ Size: 128 00:20:18.731 Transport Service Identifier: 4420 00:20:18.731 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:18.731 Transport Address: 10.0.0.2 00:20:18.731 
Discovery Log Entry 1 00:20:18.731 ---------------------- 00:20:18.731 Transport Type: 3 (TCP) 00:20:18.731 Address Family: 1 (IPv4) 00:20:18.731 Subsystem Type: 2 (NVM Subsystem) 00:20:18.731 Entry Flags: 00:20:18.731 Duplicate Returned Information: 0 00:20:18.731 Explicit Persistent Connection Support for Discovery: 0 00:20:18.731 Transport Requirements: 00:20:18.731 Secure Channel: Not Required 00:20:18.731 Port ID: 0 (0x0000) 00:20:18.731 Controller ID: 65535 (0xffff) 00:20:18.731 Admin Max SQ Size: 128 00:20:18.731 Transport Service Identifier: 4420 00:20:18.731 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:18.731 Transport Address: 10.0.0.2 [2024-07-15 16:13:04.490142] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:18.731 [2024-07-15 16:13:04.490164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12843c0) on tqpair=0x1224540 00:20:18.731 [2024-07-15 16:13:04.490176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.731 [2024-07-15 16:13:04.490185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284540) on tqpair=0x1224540 00:20:18.731 [2024-07-15 16:13:04.490194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.731 [2024-07-15 16:13:04.490202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12846c0) on tqpair=0x1224540 00:20:18.731 [2024-07-15 16:13:04.490210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.731 [2024-07-15 16:13:04.490218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.731 [2024-07-15 16:13:04.490226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.731 [2024-07-15 16:13:04.490244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.731 [2024-07-15 16:13:04.490256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.731 [2024-07-15 16:13:04.490263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.731 [2024-07-15 16:13:04.490289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.731 [2024-07-15 16:13:04.490315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.731 [2024-07-15 16:13:04.490450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.731 [2024-07-15 16:13:04.490466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.731 [2024-07-15 16:13:04.490473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.731 [2024-07-15 16:13:04.490480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.731 [2024-07-15 16:13:04.490492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.731 [2024-07-15 16:13:04.490500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.731 [2024-07-15 16:13:04.490507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.731 [2024-07-15 
16:13:04.490517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.731 [2024-07-15 16:13:04.490545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.731 [2024-07-15 16:13:04.490676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.731 [2024-07-15 16:13:04.490690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.731 [2024-07-15 16:13:04.490697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.490712] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:18.732 [2024-07-15 16:13:04.490721] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:18.732 [2024-07-15 16:13:04.490737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.490764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.490785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.490915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.490927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.490934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.490968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.490986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.490997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.491018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.491118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.491133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.491140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.491168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491185] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.491196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.491216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.491319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.491333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.491340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.491363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.491390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.491411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.491496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.491510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.491517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.491540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.491567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.491588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.491725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.491738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.491745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.491768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.491795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.491815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.491942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.491962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.491971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.491978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.491999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.492027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.492048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.492182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.492196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.492203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.492226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.492253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.492274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.492359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.492373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.492380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.492403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.492430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.492451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 
[2024-07-15 16:13:04.492579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.492592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.492599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.492621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.492648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.492669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.492797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.492809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.492816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.492839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.492859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.732 [2024-07-15 16:13:04.492870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.732 [2024-07-15 16:13:04.492890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.732 [2024-07-15 16:13:04.493022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.732 [2024-07-15 16:13:04.493036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.732 [2024-07-15 16:13:04.493043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.732 [2024-07-15 16:13:04.493050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.732 [2024-07-15 16:13:04.493067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.493094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.493114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.493203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.493217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
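The FABRIC PROPERTY SET followed by the long run of FABRIC PROPERTY GET completions around this point is the discovery controller's shutdown handshake (announced above by "RTD3E = 0 us" and "shutdown timeout = 10000 ms", closed a little further down by "shutdown complete in 7 milliseconds"): the host sets CC.SHN to request a normal shutdown, then keeps reading CSTS over the admin queue until SHST reports completion. A minimal, self-contained C sketch of that handshake, written against the NVMe register layout rather than SPDK internals; the fake_* helpers are hypothetical stand-ins for the fabrics Property Set/Get capsules:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC          0x14
#define NVME_REG_CSTS        0x1c
#define NVME_CC_SHN_NORMAL   (1u << 14)          /* CC.SHN = bits 15:14, 01b = normal shutdown */
#define NVME_CSTS_SHST(v)    (((v) >> 2) & 0x3)  /* CSTS.SHST = bits 3:2 */
#define NVME_CSTS_SHST_DONE  0x2                 /* 10b = shutdown processing complete */

/* Stand-ins for the Property Set / Property Get capsules carried on the
 * admin queue in the trace; here they just poke a fake register file so
 * the example runs on its own. */
static uint32_t fake_cc, fake_csts;

static uint32_t fake_prop_get(uint32_t off)
{
    return off == NVME_REG_CC ? fake_cc : fake_csts;
}

static void fake_prop_set(uint32_t off, uint32_t val)
{
    if (off == NVME_REG_CC) {
        fake_cc = val;
        if (val & NVME_CC_SHN_NORMAL)
            fake_csts |= NVME_CSTS_SHST_DONE << 2;   /* pretend the controller finishes at once */
    }
}

static bool shutdown_poll(unsigned int timeout_ms)
{
    /* Request a normal shutdown: one Property Set of CC with SHN = 01b. */
    fake_prop_set(NVME_REG_CC, fake_prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);

    /* Poll CSTS until SHST reads "complete"; each iteration corresponds to
     * one FABRIC PROPERTY GET completion in the trace (real code would
     * sleep ~1 ms between polls). */
    for (unsigned int waited = 0; waited < timeout_ms; waited++) {
        if (NVME_CSTS_SHST(fake_prop_get(NVME_REG_CSTS)) == NVME_CSTS_SHST_DONE)
            return true;
    }
    return false;
}

int main(void)
{
    printf("shutdown %s\n", shutdown_poll(10000) ? "complete" : "timed out");
    return 0;
}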
00:20:18.733 [2024-07-15 16:13:04.493224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.493247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.493274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.493294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.493422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.493434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.493441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.493464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.493491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.493511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.493638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.493651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.493658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.493680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.493711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.493732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.493859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.493872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.493878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.493901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.493918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.493928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.493948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.497977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.497991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.497998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.498004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.498036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.498047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.498054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224540) 00:20:18.733 [2024-07-15 16:13:04.498065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.498088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1284840, cid 3, qid 0 00:20:18.733 [2024-07-15 16:13:04.498224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.498237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.498244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.498251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1284840) on tqpair=0x1224540 00:20:18.733 [2024-07-15 16:13:04.498264] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:18.733 00:20:18.733 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:18.733 [2024-07-15 16:13:04.535139] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
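For reference, the "Discovery Log Entry 0/1" report printed above is a decode of the NVMe-oF discovery log page that the first identify run fetched with its GET LOG PAGE commands. Below is a self-contained C sketch of the entry layout as the NVMe-oF specification defines it, populated with the values from Entry 1 of the dump; it is an illustrative struct for reading raw log-page bytes, not SPDK's own definition:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One 1024-byte entry of the discovery log page; the page itself starts with
 * a 1024-byte header holding the generation counter, number of records and
 * record format shown in the dump, with entries following back to back. */
struct discovery_log_entry {
    uint8_t  trtype;          /* 3 = TCP */
    uint8_t  adrfam;          /* 1 = IPv4 */
    uint8_t  subtype;         /* 2 = NVM subsystem, 3 = current discovery subsystem */
    uint8_t  treq;            /* transport requirements (secure channel, ...) */
    uint16_t portid;
    uint16_t cntlid;          /* 0xffff = dynamic controller model */
    uint16_t asqsz;           /* admin max SQ size */
    uint8_t  reserved10[22];
    char     trsvcid[32];     /* e.g. "4420" */
    uint8_t  reserved64[192];
    char     subnqn[256];
    char     traddr[256];
    uint8_t  tsas[256];       /* transport specific address subtype */
};
_Static_assert(sizeof(struct discovery_log_entry) == 1024, "entry must be 1024 bytes");

static void print_entry(const struct discovery_log_entry *e, int idx)
{
    printf("Discovery Log Entry %d\n", idx);
    printf("  Transport Type: %u  Address Family: %u  Subsystem Type: %u\n",
           e->trtype, e->adrfam, e->subtype);
    printf("  Port ID: %u  Controller ID: %u  Admin Max SQ Size: %u\n",
           e->portid, e->cntlid, e->asqsz);
    printf("  Transport Service Identifier: %.32s\n", e->trsvcid);
    printf("  NVM Subsystem Qualified Name: %.256s\n", e->subnqn);
    printf("  Transport Address: %.256s\n", e->traddr);
}

int main(void)
{
    /* Field values copied from Discovery Log Entry 1 above. */
    struct discovery_log_entry e = {
        .trtype = 3, .adrfam = 1, .subtype = 2,
        .portid = 0, .cntlid = 0xffff, .asqsz = 128,
    };
    strcpy(e.trsvcid, "4420");
    strcpy(e.subnqn, "nqn.2016-06.io.spdk:cnode1");
    strcpy(e.traddr, "10.0.0.2");
    print_entry(&e, 1);
    return 0;
}

Compiled standalone, print_entry() reproduces the same fields the identify tool prints in the dump above.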
00:20:18.733 [2024-07-15 16:13:04.535190] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837517 ] 00:20:18.733 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.733 [2024-07-15 16:13:04.570924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:18.733 [2024-07-15 16:13:04.571000] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:18.733 [2024-07-15 16:13:04.571015] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:18.733 [2024-07-15 16:13:04.571031] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:18.733 [2024-07-15 16:13:04.571041] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:18.733 [2024-07-15 16:13:04.574997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:18.733 [2024-07-15 16:13:04.575054] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16e0540 0 00:20:18.733 [2024-07-15 16:13:04.575176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:18.733 [2024-07-15 16:13:04.575193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:18.733 [2024-07-15 16:13:04.575201] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:18.733 [2024-07-15 16:13:04.575210] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:18.733 [2024-07-15 16:13:04.575250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.575264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.575272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.733 [2024-07-15 16:13:04.575286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:18.733 [2024-07-15 16:13:04.575312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.733 [2024-07-15 16:13:04.581968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.581987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.581995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.733 [2024-07-15 16:13:04.582017] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:18.733 [2024-07-15 16:13:04.582032] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:18.733 [2024-07-15 16:13:04.582042] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:18.733 [2024-07-15 16:13:04.582060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
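The recurring "pdu type = 1 / 5 / 7" values that nvme_tcp_pdu_ch_handle reports in these traces are NVMe/TCP PDU opcodes: 1 is the ICResp answering the ICReq sent at connect time, 5 is a CapsuleResp carrying a command completion (consumed by nvme_tcp_capsule_resp_hdr_handle), and 7 is C2HData carrying controller-to-host data (consumed by nvme_tcp_c2h_data_hdr_handle). A small self-contained sketch, written against the NVMe/TCP transport specification rather than SPDK source, that decodes the 8-byte common header at the front of every PDU:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The 8-byte common header that starts every NVMe/TCP PDU. */
struct nvme_tcp_common_hdr {
    uint8_t  pdu_type;
    uint8_t  flags;
    uint8_t  hlen;            /* PDU header length */
    uint8_t  pdo;             /* PDU data offset */
    uint32_t plen;            /* total PDU length, little endian on the wire */
};

static const char *pdu_type_name(uint8_t t)
{
    switch (t) {
    case 0x00: return "ICReq";
    case 0x01: return "ICResp";
    case 0x02: return "H2CTermReq";
    case 0x03: return "C2HTermReq";
    case 0x04: return "CapsuleCmd";
    case 0x05: return "CapsuleResp";
    case 0x06: return "H2CData";
    case 0x07: return "C2HData";
    case 0x09: return "R2T";
    default:   return "reserved";
    }
}

int main(void)
{
    /* Example bytes as they might arrive off the socket: a CapsuleResp
     * (pdu type = 5) whose header and total length are both 24 bytes
     * (8-byte common header + 16-byte completion queue entry). */
    const uint8_t wire[8] = { 0x05, 0x00, 0x18, 0x00, 0x18, 0x00, 0x00, 0x00 };

    struct nvme_tcp_common_hdr ch;
    memcpy(&ch, wire, sizeof(ch));    /* assumes a little-endian host */

    printf("pdu type = %u (%s), hlen = %u, plen = %u\n",
           ch.pdu_type, pdu_type_name(ch.pdu_type), ch.hlen, ch.plen);
    return 0;
}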
00:20:18.733 [2024-07-15 16:13:04.582076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.733 [2024-07-15 16:13:04.582088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.582112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.733 [2024-07-15 16:13:04.582234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.582250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.582257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.733 [2024-07-15 16:13:04.582272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:18.733 [2024-07-15 16:13:04.582287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:18.733 [2024-07-15 16:13:04.582303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.733 [2024-07-15 16:13:04.582332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.733 [2024-07-15 16:13:04.582355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.733 [2024-07-15 16:13:04.582450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.733 [2024-07-15 16:13:04.582465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.733 [2024-07-15 16:13:04.582472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.733 [2024-07-15 16:13:04.582488] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:18.733 [2024-07-15 16:13:04.582503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:18.733 [2024-07-15 16:13:04.582518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.733 [2024-07-15 16:13:04.582532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.582543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.734 [2024-07-15 16:13:04.582564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.582656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.582671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:18.734 [2024-07-15 16:13:04.582678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.582685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.582696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:18.734 [2024-07-15 16:13:04.582716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.582725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.582732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.582744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.734 [2024-07-15 16:13:04.582768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.582853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.582868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.734 [2024-07-15 16:13:04.582875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.582882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.582891] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:18.734 [2024-07-15 16:13:04.582902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:18.734 [2024-07-15 16:13:04.582916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:18.734 [2024-07-15 16:13:04.583026] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:18.734 [2024-07-15 16:13:04.583038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:18.734 [2024-07-15 16:13:04.583050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.583081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.734 [2024-07-15 16:13:04.583103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.583225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.583241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.734 [2024-07-15 16:13:04.583248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on 
tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.583267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:18.734 [2024-07-15 16:13:04.583284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.583313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.734 [2024-07-15 16:13:04.583336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.583423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.583438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.734 [2024-07-15 16:13:04.583446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.583463] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:18.734 [2024-07-15 16:13:04.583473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.583487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:18.734 [2024-07-15 16:13:04.583501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.583519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.583539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.734 [2024-07-15 16:13:04.583561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.583698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.734 [2024-07-15 16:13:04.583714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.734 [2024-07-15 16:13:04.583721] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583729] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=4096, cccid=0 00:20:18.734 [2024-07-15 16:13:04.583741] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17403c0) on tqpair(0x16e0540): expected_datao=0, payload_size=4096 00:20:18.734 [2024-07-15 16:13:04.583753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583764] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583772] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.583799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.734 [2024-07-15 16:13:04.583806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.583824] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:18.734 [2024-07-15 16:13:04.583837] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:18.734 [2024-07-15 16:13:04.583845] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:18.734 [2024-07-15 16:13:04.583852] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:18.734 [2024-07-15 16:13:04.583860] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:18.734 [2024-07-15 16:13:04.583868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.583883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.583898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.583913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.583924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.734 [2024-07-15 16:13:04.583947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.734 [2024-07-15 16:13:04.584049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.734 [2024-07-15 16:13:04.584065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.734 [2024-07-15 16:13:04.584072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.734 [2024-07-15 16:13:04.584090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.584114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.734 [2024-07-15 16:13:04.584124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584138] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.584147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.734 [2024-07-15 16:13:04.584156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.584178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.734 [2024-07-15 16:13:04.584188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.734 [2024-07-15 16:13:04.584213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.734 [2024-07-15 16:13:04.584223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.584243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:18.734 [2024-07-15 16:13:04.584258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.734 [2024-07-15 16:13:04.584266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.735 [2024-07-15 16:13:04.584276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.735 [2024-07-15 16:13:04.584299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17403c0, cid 0, qid 0 00:20:18.735 [2024-07-15 16:13:04.584325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740540, cid 1, qid 0 00:20:18.735 [2024-07-15 16:13:04.584333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17406c0, cid 2, qid 0 00:20:18.735 [2024-07-15 16:13:04.584341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.735 [2024-07-15 16:13:04.584348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.735 [2024-07-15 16:13:04.584545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.735 [2024-07-15 16:13:04.584561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.735 [2024-07-15 16:13:04.584568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.584576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.735 [2024-07-15 16:13:04.584587] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:18.735 [2024-07-15 16:13:04.584596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.584610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.584622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.584636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.584645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.584651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.735 [2024-07-15 16:13:04.584662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.735 [2024-07-15 16:13:04.584683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.735 [2024-07-15 16:13:04.584798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.735 [2024-07-15 16:13:04.584813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.735 [2024-07-15 16:13:04.584820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.584828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.735 [2024-07-15 16:13:04.584892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.584913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.584929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.584941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.735 [2024-07-15 16:13:04.584953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.735 [2024-07-15 16:13:04.584985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.735 [2024-07-15 16:13:04.585089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.735 [2024-07-15 16:13:04.585109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.735 [2024-07-15 16:13:04.585119] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585126] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=4096, cccid=4 00:20:18.735 [2024-07-15 16:13:04.585134] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17409c0) on tqpair(0x16e0540): expected_datao=0, payload_size=4096 00:20:18.735 [2024-07-15 16:13:04.585141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585166] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:20:18.735 [2024-07-15 16:13:04.585199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.735 [2024-07-15 16:13:04.585206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.735 [2024-07-15 16:13:04.585229] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:18.735 [2024-07-15 16:13:04.585246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.735 [2024-07-15 16:13:04.585301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.735 [2024-07-15 16:13:04.585323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.735 [2024-07-15 16:13:04.585435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.735 [2024-07-15 16:13:04.585455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.735 [2024-07-15 16:13:04.585465] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=4096, cccid=4 00:20:18.735 [2024-07-15 16:13:04.585480] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17409c0) on tqpair(0x16e0540): expected_datao=0, payload_size=4096 00:20:18.735 [2024-07-15 16:13:04.585487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.735 [2024-07-15 16:13:04.585544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.735 [2024-07-15 16:13:04.585550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.735 [2024-07-15 16:13:04.585577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.735 [2024-07-15 16:13:04.585638] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.735 [2024-07-15 16:13:04.585660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.735 [2024-07-15 16:13:04.585756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.735 [2024-07-15 16:13:04.585777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.735 [2024-07-15 16:13:04.585786] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585793] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=4096, cccid=4 00:20:18.735 [2024-07-15 16:13:04.585800] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17409c0) on tqpair(0x16e0540): expected_datao=0, payload_size=4096 00:20:18.735 [2024-07-15 16:13:04.585810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585833] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585844] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.735 [2024-07-15 16:13:04.585866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.735 [2024-07-15 16:13:04.585873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.735 [2024-07-15 16:13:04.585880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.735 [2024-07-15 16:13:04.585893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:18.735 [2024-07-15 16:13:04.585947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:18.736 [2024-07-15 16:13:04.589965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:18.736 [2024-07-15 16:13:04.589981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:18.736 [2024-07-15 16:13:04.589990] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:18.736 [2024-07-15 16:13:04.589997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:18.736 [2024-07-15 16:13:04.590006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:18.736 [2024-07-15 16:13:04.590024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.590054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.736 [2024-07-15 16:13:04.590106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 00:20:18.736 [2024-07-15 16:13:04.590133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740b40, cid 5, qid 0 00:20:18.736 [2024-07-15 16:13:04.590257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.590272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.590280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.590297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.590307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.590313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740b40) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.590338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.590381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740b40, cid 5, qid 0 00:20:18.736 [2024-07-15 16:13:04.590482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.590497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.590504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740b40) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.590529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.590572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740b40, cid 5, qid 0 00:20:18.736 [2024-07-15 16:13:04.590679] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.590693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.590700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740b40) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.590742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.590788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740b40, cid 5, qid 0 00:20:18.736 [2024-07-15 16:13:04.590873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.590888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.590895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740b40) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.590932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.590979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.590987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.590997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.591009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.591026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.591038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16e0540) 00:20:18.736 [2024-07-15 16:13:04.591055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.736 [2024-07-15 16:13:04.591078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740b40, cid 5, qid 0 00:20:18.736 [2024-07-15 16:13:04.591089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17409c0, cid 4, qid 0 
00:20:18.736 [2024-07-15 16:13:04.591097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740cc0, cid 6, qid 0 00:20:18.736 [2024-07-15 16:13:04.591105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740e40, cid 7, qid 0 00:20:18.736 [2024-07-15 16:13:04.591288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.736 [2024-07-15 16:13:04.591304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.736 [2024-07-15 16:13:04.591312] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591322] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=8192, cccid=5 00:20:18.736 [2024-07-15 16:13:04.591331] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1740b40) on tqpair(0x16e0540): expected_datao=0, payload_size=8192 00:20:18.736 [2024-07-15 16:13:04.591341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591357] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591367] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.736 [2024-07-15 16:13:04.591385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.736 [2024-07-15 16:13:04.591392] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591398] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=512, cccid=4 00:20:18.736 [2024-07-15 16:13:04.591406] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17409c0) on tqpair(0x16e0540): expected_datao=0, payload_size=512 00:20:18.736 [2024-07-15 16:13:04.591413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591422] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.736 [2024-07-15 16:13:04.591447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.736 [2024-07-15 16:13:04.591457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591464] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=512, cccid=6 00:20:18.736 [2024-07-15 16:13:04.591472] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1740cc0) on tqpair(0x16e0540): expected_datao=0, payload_size=512 00:20:18.736 [2024-07-15 16:13:04.591479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591488] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:18.736 [2024-07-15 16:13:04.591513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:18.736 [2024-07-15 16:13:04.591519] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591526] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e0540): datao=0, datal=4096, cccid=7 00:20:18.736 [2024-07-15 16:13:04.591533] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1740e40) on tqpair(0x16e0540): expected_datao=0, payload_size=4096 00:20:18.736 [2024-07-15 16:13:04.591541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591550] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591557] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.591579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.591586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.736 [2024-07-15 16:13:04.591593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740b40) on tqpair=0x16e0540 00:20:18.736 [2024-07-15 16:13:04.591628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.736 [2024-07-15 16:13:04.591639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.736 [2024-07-15 16:13:04.591646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.737 [2024-07-15 16:13:04.591652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17409c0) on tqpair=0x16e0540 00:20:18.737 [2024-07-15 16:13:04.591667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.737 [2024-07-15 16:13:04.591677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.737 [2024-07-15 16:13:04.591699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.737 [2024-07-15 16:13:04.591705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740cc0) on tqpair=0x16e0540 00:20:18.737 [2024-07-15 16:13:04.591716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.737 [2024-07-15 16:13:04.591725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.737 [2024-07-15 16:13:04.591732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.737 [2024-07-15 16:13:04.591738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740e40) on tqpair=0x16e0540 00:20:18.737 ===================================================== 00:20:18.737 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.737 ===================================================== 00:20:18.737 Controller Capabilities/Features 00:20:18.737 ================================ 00:20:18.737 Vendor ID: 8086 00:20:18.737 Subsystem Vendor ID: 8086 00:20:18.737 Serial Number: SPDK00000000000001 00:20:18.737 Model Number: SPDK bdev Controller 00:20:18.737 Firmware Version: 24.09 00:20:18.737 Recommended Arb Burst: 6 00:20:18.737 IEEE OUI Identifier: e4 d2 5c 00:20:18.737 Multi-path I/O 00:20:18.737 May have multiple subsystem ports: Yes 00:20:18.737 May have multiple controllers: Yes 00:20:18.737 Associated with SR-IOV VF: No 00:20:18.737 Max Data Transfer Size: 131072 00:20:18.737 Max Number of Namespaces: 32 00:20:18.737 Max Number of I/O Queues: 127 00:20:18.737 NVMe Specification Version (VS): 1.3 00:20:18.737 NVMe Specification Version (Identify): 1.3 00:20:18.737 Maximum Queue Entries: 128 00:20:18.737 Contiguous Queues Required: Yes 00:20:18.737 
Arbitration Mechanisms Supported 00:20:18.737 Weighted Round Robin: Not Supported 00:20:18.737 Vendor Specific: Not Supported 00:20:18.737 Reset Timeout: 15000 ms 00:20:18.737 Doorbell Stride: 4 bytes 00:20:18.737 NVM Subsystem Reset: Not Supported 00:20:18.737 Command Sets Supported 00:20:18.737 NVM Command Set: Supported 00:20:18.737 Boot Partition: Not Supported 00:20:18.737 Memory Page Size Minimum: 4096 bytes 00:20:18.737 Memory Page Size Maximum: 4096 bytes 00:20:18.737 Persistent Memory Region: Not Supported 00:20:18.737 Optional Asynchronous Events Supported 00:20:18.737 Namespace Attribute Notices: Supported 00:20:18.737 Firmware Activation Notices: Not Supported 00:20:18.737 ANA Change Notices: Not Supported 00:20:18.737 PLE Aggregate Log Change Notices: Not Supported 00:20:18.737 LBA Status Info Alert Notices: Not Supported 00:20:18.737 EGE Aggregate Log Change Notices: Not Supported 00:20:18.737 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.737 Zone Descriptor Change Notices: Not Supported 00:20:18.737 Discovery Log Change Notices: Not Supported 00:20:18.737 Controller Attributes 00:20:18.737 128-bit Host Identifier: Supported 00:20:18.737 Non-Operational Permissive Mode: Not Supported 00:20:18.737 NVM Sets: Not Supported 00:20:18.737 Read Recovery Levels: Not Supported 00:20:18.737 Endurance Groups: Not Supported 00:20:18.737 Predictable Latency Mode: Not Supported 00:20:18.737 Traffic Based Keep ALive: Not Supported 00:20:18.737 Namespace Granularity: Not Supported 00:20:18.737 SQ Associations: Not Supported 00:20:18.737 UUID List: Not Supported 00:20:18.737 Multi-Domain Subsystem: Not Supported 00:20:18.737 Fixed Capacity Management: Not Supported 00:20:18.737 Variable Capacity Management: Not Supported 00:20:18.737 Delete Endurance Group: Not Supported 00:20:18.737 Delete NVM Set: Not Supported 00:20:18.737 Extended LBA Formats Supported: Not Supported 00:20:18.737 Flexible Data Placement Supported: Not Supported 00:20:18.737 00:20:18.737 Controller Memory Buffer Support 00:20:18.737 ================================ 00:20:18.737 Supported: No 00:20:18.737 00:20:18.737 Persistent Memory Region Support 00:20:18.737 ================================ 00:20:18.737 Supported: No 00:20:18.737 00:20:18.737 Admin Command Set Attributes 00:20:18.737 ============================ 00:20:18.737 Security Send/Receive: Not Supported 00:20:18.737 Format NVM: Not Supported 00:20:18.737 Firmware Activate/Download: Not Supported 00:20:18.737 Namespace Management: Not Supported 00:20:18.737 Device Self-Test: Not Supported 00:20:18.737 Directives: Not Supported 00:20:18.737 NVMe-MI: Not Supported 00:20:18.737 Virtualization Management: Not Supported 00:20:18.737 Doorbell Buffer Config: Not Supported 00:20:18.737 Get LBA Status Capability: Not Supported 00:20:18.737 Command & Feature Lockdown Capability: Not Supported 00:20:18.737 Abort Command Limit: 4 00:20:18.737 Async Event Request Limit: 4 00:20:18.737 Number of Firmware Slots: N/A 00:20:18.737 Firmware Slot 1 Read-Only: N/A 00:20:18.737 Firmware Activation Without Reset: N/A 00:20:18.737 Multiple Update Detection Support: N/A 00:20:18.737 Firmware Update Granularity: No Information Provided 00:20:18.737 Per-Namespace SMART Log: No 00:20:18.737 Asymmetric Namespace Access Log Page: Not Supported 00:20:18.737 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:18.737 Command Effects Log Page: Supported 00:20:18.737 Get Log Page Extended Data: Supported 00:20:18.737 Telemetry Log Pages: Not Supported 00:20:18.737 Persistent Event Log 
Pages: Not Supported 00:20:18.737 Supported Log Pages Log Page: May Support 00:20:18.737 Commands Supported & Effects Log Page: Not Supported 00:20:18.737 Feature Identifiers & Effects Log Page:May Support 00:20:18.737 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.737 Data Area 4 for Telemetry Log: Not Supported 00:20:18.737 Error Log Page Entries Supported: 128 00:20:18.737 Keep Alive: Supported 00:20:18.737 Keep Alive Granularity: 10000 ms 00:20:18.737 00:20:18.737 NVM Command Set Attributes 00:20:18.737 ========================== 00:20:18.737 Submission Queue Entry Size 00:20:18.737 Max: 64 00:20:18.737 Min: 64 00:20:18.737 Completion Queue Entry Size 00:20:18.737 Max: 16 00:20:18.737 Min: 16 00:20:18.737 Number of Namespaces: 32 00:20:18.737 Compare Command: Supported 00:20:18.737 Write Uncorrectable Command: Not Supported 00:20:18.737 Dataset Management Command: Supported 00:20:18.737 Write Zeroes Command: Supported 00:20:18.737 Set Features Save Field: Not Supported 00:20:18.737 Reservations: Supported 00:20:18.737 Timestamp: Not Supported 00:20:18.737 Copy: Supported 00:20:18.737 Volatile Write Cache: Present 00:20:18.737 Atomic Write Unit (Normal): 1 00:20:18.737 Atomic Write Unit (PFail): 1 00:20:18.737 Atomic Compare & Write Unit: 1 00:20:18.737 Fused Compare & Write: Supported 00:20:18.737 Scatter-Gather List 00:20:18.737 SGL Command Set: Supported 00:20:18.737 SGL Keyed: Supported 00:20:18.737 SGL Bit Bucket Descriptor: Not Supported 00:20:18.737 SGL Metadata Pointer: Not Supported 00:20:18.737 Oversized SGL: Not Supported 00:20:18.737 SGL Metadata Address: Not Supported 00:20:18.737 SGL Offset: Supported 00:20:18.737 Transport SGL Data Block: Not Supported 00:20:18.737 Replay Protected Memory Block: Not Supported 00:20:18.737 00:20:18.737 Firmware Slot Information 00:20:18.737 ========================= 00:20:18.737 Active slot: 1 00:20:18.737 Slot 1 Firmware Revision: 24.09 00:20:18.737 00:20:18.737 00:20:18.737 Commands Supported and Effects 00:20:18.737 ============================== 00:20:18.737 Admin Commands 00:20:18.737 -------------- 00:20:18.737 Get Log Page (02h): Supported 00:20:18.737 Identify (06h): Supported 00:20:18.737 Abort (08h): Supported 00:20:18.737 Set Features (09h): Supported 00:20:18.737 Get Features (0Ah): Supported 00:20:18.737 Asynchronous Event Request (0Ch): Supported 00:20:18.737 Keep Alive (18h): Supported 00:20:18.737 I/O Commands 00:20:18.737 ------------ 00:20:18.737 Flush (00h): Supported LBA-Change 00:20:18.737 Write (01h): Supported LBA-Change 00:20:18.737 Read (02h): Supported 00:20:18.737 Compare (05h): Supported 00:20:18.737 Write Zeroes (08h): Supported LBA-Change 00:20:18.737 Dataset Management (09h): Supported LBA-Change 00:20:18.737 Copy (19h): Supported LBA-Change 00:20:18.737 00:20:18.737 Error Log 00:20:18.737 ========= 00:20:18.737 00:20:18.737 Arbitration 00:20:18.737 =========== 00:20:18.737 Arbitration Burst: 1 00:20:18.737 00:20:18.737 Power Management 00:20:18.737 ================ 00:20:18.737 Number of Power States: 1 00:20:18.737 Current Power State: Power State #0 00:20:18.737 Power State #0: 00:20:18.737 Max Power: 0.00 W 00:20:18.737 Non-Operational State: Operational 00:20:18.737 Entry Latency: Not Reported 00:20:18.737 Exit Latency: Not Reported 00:20:18.737 Relative Read Throughput: 0 00:20:18.737 Relative Read Latency: 0 00:20:18.737 Relative Write Throughput: 0 00:20:18.737 Relative Write Latency: 0 00:20:18.737 Idle Power: Not Reported 00:20:18.737 Active Power: Not Reported 00:20:18.737 
Non-Operational Permissive Mode: Not Supported 00:20:18.737 00:20:18.737 Health Information 00:20:18.737 ================== 00:20:18.737 Critical Warnings: 00:20:18.737 Available Spare Space: OK 00:20:18.737 Temperature: OK 00:20:18.737 Device Reliability: OK 00:20:18.737 Read Only: No 00:20:18.737 Volatile Memory Backup: OK 00:20:18.738 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:18.738 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:18.738 Available Spare: 0% 00:20:18.738 Available Spare Threshold: 0% 00:20:18.738 Life Percentage Used:[2024-07-15 16:13:04.591847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.591859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.591869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.591890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740e40, cid 7, qid 0 00:20:18.738 [2024-07-15 16:13:04.592092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.592109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.592117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740e40) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592174] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:18.738 [2024-07-15 16:13:04.592197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17403c0) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.738 [2024-07-15 16:13:04.592219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740540) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.738 [2024-07-15 16:13:04.592236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17406c0) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.738 [2024-07-15 16:13:04.592252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.738 [2024-07-15 16:13:04.592287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.592312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.592333] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.592456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.592471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.592478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.592527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.592555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.592649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.592664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.592672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592687] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:18.738 [2024-07-15 16:13:04.592695] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:18.738 [2024-07-15 16:13:04.592713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.592741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.592762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.592844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.592860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.592867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.592895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.592912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.592923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.592947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.593039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.593055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.593062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.593089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.593117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.593141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.593226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.593242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.593249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.593275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.593303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.593324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.593404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.593419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.593426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.593453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.593481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.593503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 
16:13:04.593587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.593606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.593614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.593642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.593670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.593695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.593781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.593796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.593803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.593829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.593846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.593857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.593878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.597972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.597989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 [2024-07-15 16:13:04.597997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.598003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.738 [2024-07-15 16:13:04.598022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.598033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.598039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e0540) 00:20:18.738 [2024-07-15 16:13:04.598050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.738 [2024-07-15 16:13:04.598072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1740840, cid 3, qid 0 00:20:18.738 [2024-07-15 16:13:04.598194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:18.738 [2024-07-15 16:13:04.598210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:18.738 
[2024-07-15 16:13:04.598217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:18.738 [2024-07-15 16:13:04.598225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1740840) on tqpair=0x16e0540 00:20:18.739 [2024-07-15 16:13:04.598239] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:18.739 0% 00:20:18.739 Data Units Read: 0 00:20:18.739 Data Units Written: 0 00:20:18.739 Host Read Commands: 0 00:20:18.739 Host Write Commands: 0 00:20:18.739 Controller Busy Time: 0 minutes 00:20:18.739 Power Cycles: 0 00:20:18.739 Power On Hours: 0 hours 00:20:18.739 Unsafe Shutdowns: 0 00:20:18.739 Unrecoverable Media Errors: 0 00:20:18.739 Lifetime Error Log Entries: 0 00:20:18.739 Warning Temperature Time: 0 minutes 00:20:18.739 Critical Temperature Time: 0 minutes 00:20:18.739 00:20:18.739 Number of Queues 00:20:18.739 ================ 00:20:18.739 Number of I/O Submission Queues: 127 00:20:18.739 Number of I/O Completion Queues: 127 00:20:18.739 00:20:18.739 Active Namespaces 00:20:18.739 ================= 00:20:18.739 Namespace ID:1 00:20:18.739 Error Recovery Timeout: Unlimited 00:20:18.739 Command Set Identifier: NVM (00h) 00:20:18.739 Deallocate: Supported 00:20:18.739 Deallocated/Unwritten Error: Not Supported 00:20:18.739 Deallocated Read Value: Unknown 00:20:18.739 Deallocate in Write Zeroes: Not Supported 00:20:18.739 Deallocated Guard Field: 0xFFFF 00:20:18.739 Flush: Supported 00:20:18.739 Reservation: Supported 00:20:18.739 Namespace Sharing Capabilities: Multiple Controllers 00:20:18.739 Size (in LBAs): 131072 (0GiB) 00:20:18.739 Capacity (in LBAs): 131072 (0GiB) 00:20:18.739 Utilization (in LBAs): 131072 (0GiB) 00:20:18.739 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:18.739 EUI64: ABCDEF0123456789 00:20:18.739 UUID: 5f1cc787-a0b4-4c4c-b5f7-9bf44bd2f6b5 00:20:18.739 Thin Provisioning: Not Supported 00:20:18.739 Per-NS Atomic Units: Yes 00:20:18.739 Atomic Boundary Size (Normal): 0 00:20:18.739 Atomic Boundary Size (PFail): 0 00:20:18.739 Atomic Boundary Offset: 0 00:20:18.739 Maximum Single Source Range Length: 65535 00:20:18.739 Maximum Copy Length: 65535 00:20:18.739 Maximum Source Range Count: 1 00:20:18.739 NGUID/EUI64 Never Reused: No 00:20:18.739 Namespace Write Protected: No 00:20:18.739 Number of LBA Formats: 1 00:20:18.739 Current LBA Format: LBA Format #00 00:20:18.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:18.739 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.739 rmmod nvme_tcp 00:20:18.739 rmmod nvme_fabrics 00:20:18.739 rmmod nvme_keyring 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 837359 ']' 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 837359 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 837359 ']' 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 837359 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837359 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837359' 00:20:18.739 killing process with pid 837359 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 837359 00:20:18.739 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 837359 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.998 16:13:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.526 16:13:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.526 00:20:21.526 real 0m6.131s 00:20:21.526 user 0m7.147s 00:20:21.526 sys 0m1.956s 00:20:21.526 16:13:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.526 16:13:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 ************************************ 00:20:21.526 END TEST nvmf_identify 00:20:21.526 ************************************ 00:20:21.526 16:13:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:21.526 16:13:07 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:21.526 16:13:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:21.526 16:13:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.526 16:13:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:20:21.526 ************************************ 00:20:21.526 START TEST nvmf_perf 00:20:21.526 ************************************ 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:21.527 * Looking for test storage... 00:20:21.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.527 
16:13:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.527 16:13:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:23.425 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:23.425 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:23.425 Found net devices under 0000:09:00.0: cvl_0_0 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:23.425 Found net devices under 0000:09:00.1: cvl_0_1 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:23.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:20:23.425 00:20:23.425 --- 10.0.0.2 ping statistics --- 00:20:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.425 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:23.425 00:20:23.425 --- 10.0.0.1 ping statistics --- 00:20:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.425 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.425 16:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=839448 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 839448 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 839448 ']' 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.426 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:23.426 [2024-07-15 16:13:09.370081] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:20:23.426 [2024-07-15 16:13:09.370158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.426 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.684 [2024-07-15 16:13:09.435652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.684 [2024-07-15 16:13:09.536103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.684 [2024-07-15 16:13:09.536161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:23.684 [2024-07-15 16:13:09.536188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.684 [2024-07-15 16:13:09.536199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.684 [2024-07-15 16:13:09.536210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.684 [2024-07-15 16:13:09.536290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.684 [2024-07-15 16:13:09.536358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.684 [2024-07-15 16:13:09.536433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.684 [2024-07-15 16:13:09.536435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.684 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.684 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:23.684 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.684 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.684 16:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:23.942 16:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.942 16:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:23.942 16:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:27.221 16:13:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:27.221 16:13:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:27.221 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:20:27.221 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:27.479 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:27.479 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:20:27.479 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:27.479 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:27.479 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.735 [2024-07-15 16:13:13.580912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.735 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.992 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:27.992 16:13:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.248 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:28.248 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:28.505 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.766 [2024-07-15 16:13:14.580619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.766 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:29.047 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:20:29.047 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:29.047 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:29.047 16:13:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:30.418 Initializing NVMe Controllers 00:20:30.418 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:20:30.418 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:20:30.418 Initialization complete. Launching workers. 00:20:30.418 ======================================================== 00:20:30.418 Latency(us) 00:20:30.418 Device Information : IOPS MiB/s Average min max 00:20:30.418 PCIE (0000:0b:00.0) NSID 1 from core 0: 84919.13 331.72 376.35 32.79 6253.17 00:20:30.418 ======================================================== 00:20:30.418 Total : 84919.13 331.72 376.35 32.79 6253.17 00:20:30.418 00:20:30.418 16:13:16 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.418 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.789 Initializing NVMe Controllers 00:20:31.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.789 Initialization complete. Launching workers. 
00:20:31.789 ======================================================== 00:20:31.789 Latency(us) 00:20:31.789 Device Information : IOPS MiB/s Average min max 00:20:31.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9687.94 138.29 45790.24 00:20:31.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35.00 0.14 29640.00 7955.59 47895.16 00:20:31.789 ======================================================== 00:20:31.789 Total : 139.00 0.54 14711.84 138.29 47895.16 00:20:31.789 00:20:31.789 16:13:17 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.789 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.720 Initializing NVMe Controllers 00:20:32.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:32.720 Initialization complete. Launching workers. 00:20:32.720 ======================================================== 00:20:32.720 Latency(us) 00:20:32.720 Device Information : IOPS MiB/s Average min max 00:20:32.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8271.06 32.31 3869.46 853.00 7484.78 00:20:32.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3867.53 15.11 8299.74 5313.85 16042.72 00:20:32.720 ======================================================== 00:20:32.720 Total : 12138.59 47.42 5281.02 853.00 16042.72 00:20:32.720 00:20:32.720 16:13:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:32.720 16:13:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:32.720 16:13:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.720 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.250 Initializing NVMe Controllers 00:20:35.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.250 Controller IO queue size 128, less than required. 00:20:35.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.250 Controller IO queue size 128, less than required. 00:20:35.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:35.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:35.250 Initialization complete. Launching workers. 
00:20:35.250 ======================================================== 00:20:35.250 Latency(us) 00:20:35.250 Device Information : IOPS MiB/s Average min max 00:20:35.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1717.92 429.48 75884.52 53132.38 121467.86 00:20:35.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.62 149.41 227750.07 97095.94 337836.23 00:20:35.250 ======================================================== 00:20:35.250 Total : 2315.54 578.89 115079.86 53132.38 337836.23 00:20:35.250 00:20:35.250 16:13:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:35.250 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.507 No valid NVMe controllers or AIO or URING devices found 00:20:35.507 Initializing NVMe Controllers 00:20:35.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.507 Controller IO queue size 128, less than required. 00:20:35.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.507 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:35.507 Controller IO queue size 128, less than required. 00:20:35.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.507 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:35.507 WARNING: Some requested NVMe devices were skipped 00:20:35.507 16:13:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:35.507 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.038 Initializing NVMe Controllers 00:20:38.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.038 Controller IO queue size 128, less than required. 00:20:38.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:38.038 Controller IO queue size 128, less than required. 00:20:38.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:38.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:38.038 Initialization complete. Launching workers. 
00:20:38.038 00:20:38.038 ==================== 00:20:38.038 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:38.038 TCP transport: 00:20:38.038 polls: 8852 00:20:38.038 idle_polls: 5691 00:20:38.038 sock_completions: 3161 00:20:38.038 nvme_completions: 6023 00:20:38.038 submitted_requests: 9042 00:20:38.038 queued_requests: 1 00:20:38.038 00:20:38.038 ==================== 00:20:38.038 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:38.038 TCP transport: 00:20:38.038 polls: 12311 00:20:38.038 idle_polls: 8802 00:20:38.038 sock_completions: 3509 00:20:38.038 nvme_completions: 6145 00:20:38.038 submitted_requests: 9078 00:20:38.038 queued_requests: 1 00:20:38.038 ======================================================== 00:20:38.038 Latency(us) 00:20:38.038 Device Information : IOPS MiB/s Average min max 00:20:38.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1504.48 376.12 87282.87 59698.68 145167.50 00:20:38.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1534.95 383.74 84431.62 44215.12 132493.69 00:20:38.038 ======================================================== 00:20:38.038 Total : 3039.43 759.86 85842.95 44215.12 145167.50 00:20:38.038 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.038 16:13:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.038 rmmod nvme_tcp 00:20:38.038 rmmod nvme_fabrics 00:20:38.038 rmmod nvme_keyring 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 839448 ']' 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 839448 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 839448 ']' 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 839448 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.038 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839448 00:20:38.294 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.294 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:38.294 16:13:24 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 839448' 00:20:38.294 killing process with pid 839448 00:20:38.294 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 839448 00:20:38.294 16:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 839448 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.663 16:13:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.200 16:13:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.200 00:20:42.200 real 0m20.627s 00:20:42.200 user 1m3.464s 00:20:42.200 sys 0m5.135s 00:20:42.200 16:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.200 16:13:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:42.200 ************************************ 00:20:42.200 END TEST nvmf_perf 00:20:42.200 ************************************ 00:20:42.200 16:13:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:42.200 16:13:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:42.200 16:13:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.200 16:13:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.200 16:13:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.200 ************************************ 00:20:42.200 START TEST nvmf_fio_host 00:20:42.200 ************************************ 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:42.200 * Looking for test storage... 
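Note: the nvmf_fio_host test starting here repeats the E810/TCP environment setup and then exercises the target through fio's SPDK external ioengine. Condensed from the commands traced further down in this log (workspace paths, NQNs and addresses exactly as logged; the $SPDK/$RPC variables are shorthand added here for readability — this is a summary of what fio.sh drives, not a replacement for the script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"                                    # talks to the nvmf_tgt started inside the target netns
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc1                     # RAM-backed bdev used as the namespace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # fio then attaches over NVMe/TCP through the SPDK fio plugin instead of a kernel block device:
  LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096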
00:20:42.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.200 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.201 16:13:27 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:44.111 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:44.111 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.111 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:44.112 Found net devices under 0000:09:00.0: cvl_0_0 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:44.112 Found net devices under 0000:09:00.1: cvl_0_1 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
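Note: with both E810 ports discovered and is_hw=yes, common.sh falls through to nvmf_tcp_init, which sets up the test topology traced in the next lines: one port (cvl_0_0) is moved into a target network namespace and the other (cvl_0_1) stays in the root namespace, so initiator and target traffic cross the physical link rather than loopback. Reduced to its visible effect (interface names and addresses exactly as logged):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1             # drop any stale addresses first
  ip netns add cvl_0_0_ns_spdk                                   # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator interface, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                             # both directions must answer before the test proceeds
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1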
00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:20:44.112 00:20:44.112 --- 10.0.0.2 ping statistics --- 00:20:44.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.112 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:44.112 00:20:44.112 --- 10.0.0.1 ping statistics --- 00:20:44.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.112 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=843282 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 843282 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 843282 ']' 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.112 16:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.112 [2024-07-15 16:13:29.868332] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:20:44.112 [2024-07-15 16:13:29.868408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.112 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.112 [2024-07-15 16:13:29.931912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.112 [2024-07-15 16:13:30.037971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:44.112 [2024-07-15 16:13:30.038036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.112 [2024-07-15 16:13:30.038064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.112 [2024-07-15 16:13:30.038077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.112 [2024-07-15 16:13:30.038086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.112 [2024-07-15 16:13:30.038155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.112 [2024-07-15 16:13:30.038241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.112 [2024-07-15 16:13:30.038311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.112 [2024-07-15 16:13:30.038315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.370 16:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.370 16:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:44.370 16:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:44.629 [2024-07-15 16:13:30.397474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.629 16:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:44.629 16:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.629 16:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.629 16:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:44.887 Malloc1 00:20:44.887 16:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.145 16:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:45.403 16:13:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.661 [2024-07-15 16:13:31.468302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.661 16:13:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:45.919 16:13:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:46.177 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:46.177 fio-3.35 00:20:46.177 Starting 1 thread 00:20:46.177 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.730 00:20:48.730 test: (groupid=0, jobs=1): err= 0: pid=843635: Mon Jul 15 16:13:34 2024 00:20:48.730 read: IOPS=9022, BW=35.2MiB/s (37.0MB/s)(70.7MiB/2006msec) 00:20:48.730 slat (usec): min=2, max=156, avg= 2.71, stdev= 1.84 00:20:48.730 clat (usec): min=2527, max=13104, avg=7715.92, stdev=642.18 00:20:48.730 lat (usec): min=2557, max=13107, avg=7718.63, stdev=642.09 00:20:48.730 clat percentiles (usec): 00:20:48.730 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:20:48.730 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:20:48.730 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:20:48.730 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11338], 99.95th=[11994], 00:20:48.730 | 99.99th=[12256] 00:20:48.730 bw ( KiB/s): min=34752, 
max=36968, per=99.90%, avg=36054.00, stdev=936.48, samples=4 00:20:48.730 iops : min= 8688, max= 9242, avg=9013.50, stdev=234.12, samples=4 00:20:48.730 write: IOPS=9041, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec); 0 zone resets 00:20:48.730 slat (usec): min=2, max=127, avg= 2.86, stdev= 1.30 00:20:48.730 clat (usec): min=1389, max=11794, avg=6359.20, stdev=522.91 00:20:48.730 lat (usec): min=1397, max=11797, avg=6362.06, stdev=522.85 00:20:48.730 clat percentiles (usec): 00:20:48.730 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:20:48.730 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:20:48.730 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:20:48.730 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[10159], 99.95th=[10814], 00:20:48.730 | 99.99th=[11731] 00:20:48.730 bw ( KiB/s): min=35664, max=36616, per=100.00%, avg=36166.00, stdev=415.45, samples=4 00:20:48.730 iops : min= 8916, max= 9154, avg=9041.50, stdev=103.86, samples=4 00:20:48.730 lat (msec) : 2=0.03%, 4=0.11%, 10=99.70%, 20=0.16% 00:20:48.730 cpu : usr=65.04%, sys=33.12%, ctx=113, majf=0, minf=41 00:20:48.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.730 issued rwts: total=18100,18137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.730 00:20:48.730 Run status group 0 (all jobs): 00:20:48.730 READ: bw=35.2MiB/s (37.0MB/s), 35.2MiB/s-35.2MiB/s (37.0MB/s-37.0MB/s), io=70.7MiB (74.1MB), run=2006-2006msec 00:20:48.730 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:48.730 16:13:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:48.730 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:48.730 fio-3.35 00:20:48.730 Starting 1 thread 00:20:48.730 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.257 00:20:51.257 test: (groupid=0, jobs=1): err= 0: pid=844037: Mon Jul 15 16:13:36 2024 00:20:51.257 read: IOPS=8433, BW=132MiB/s (138MB/s)(265MiB/2009msec) 00:20:51.257 slat (nsec): min=2948, max=95065, avg=3828.86, stdev=1879.32 00:20:51.257 clat (usec): min=2500, max=15287, avg=8790.93, stdev=1944.62 00:20:51.257 lat (usec): min=2504, max=15290, avg=8794.75, stdev=1944.64 00:20:51.257 clat percentiles (usec): 00:20:51.257 | 1.00th=[ 4948], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7046], 00:20:51.257 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:20:51.257 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[11863], 00:20:51.257 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14746], 99.95th=[14877], 00:20:51.257 | 99.99th=[15139] 00:20:51.257 bw ( KiB/s): min=62176, max=76608, per=51.06%, avg=68888.00, stdev=7793.93, samples=4 00:20:51.257 iops : min= 3886, max= 4788, avg=4305.50, stdev=487.12, samples=4 00:20:51.257 write: IOPS=5056, BW=79.0MiB/s (82.8MB/s)(141MiB/1788msec); 0 zone resets 00:20:51.257 slat (usec): min=30, max=149, avg=34.24, stdev= 5.77 00:20:51.257 clat (usec): min=6774, max=18531, avg=11321.42, stdev=1917.11 00:20:51.257 lat (usec): min=6806, max=18563, avg=11355.66, stdev=1917.06 00:20:51.257 clat percentiles (usec): 00:20:51.257 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:20:51.257 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:20:51.257 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14222], 95.00th=[14877], 00:20:51.257 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[18220], 00:20:51.257 | 99.99th=[18482] 00:20:51.257 bw ( KiB/s): min=63648, max=79872, per=88.64%, avg=71712.00, stdev=8609.41, samples=4 00:20:51.257 iops : min= 3978, max= 4992, avg=4482.00, stdev=538.09, samples=4 00:20:51.257 lat (msec) : 4=0.10%, 10=55.71%, 20=44.19% 00:20:51.257 cpu : usr=78.49%, sys=20.27%, ctx=42, majf=0, minf=69 
00:20:51.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:51.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.257 issued rwts: total=16942,9041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.257 00:20:51.257 Run status group 0 (all jobs): 00:20:51.257 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (278MB), run=2009-2009msec 00:20:51.257 WRITE: bw=79.0MiB/s (82.8MB/s), 79.0MiB/s-79.0MiB/s (82.8MB/s-82.8MB/s), io=141MiB (148MB), run=1788-1788msec 00:20:51.257 16:13:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.257 16:13:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:51.257 16:13:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:51.257 16:13:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.258 rmmod nvme_tcp 00:20:51.258 rmmod nvme_fabrics 00:20:51.258 rmmod nvme_keyring 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 843282 ']' 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 843282 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 843282 ']' 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 843282 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 843282 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 843282' 00:20:51.258 killing process with pid 843282 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 843282 00:20:51.258 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 843282 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.516 16:13:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.055 16:13:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.055 00:20:54.055 real 0m11.768s 00:20:54.055 user 0m34.917s 00:20:54.055 sys 0m3.724s 00:20:54.055 16:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.055 16:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.055 ************************************ 00:20:54.055 END TEST nvmf_fio_host 00:20:54.055 ************************************ 00:20:54.055 16:13:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:54.055 16:13:39 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:54.055 16:13:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.055 16:13:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.055 16:13:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.055 ************************************ 00:20:54.055 START TEST nvmf_failover 00:20:54.055 ************************************ 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:54.055 * Looking for test storage... 
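For reference, the nvmf_fio_host run that just finished drives I/O through fio's external SPDK ioengine rather than a kernel block device: the spdk_nvme plugin is preloaded and the NVMe-oF/TCP target address is encoded in the --filename string. A minimal sketch of that invocation, assuming the plugin was built under build/fio/spdk_nvme in this workspace and a subsystem is already listening on 10.0.0.2:4420:

# Preload the SPDK fio plugin and address the remote namespace through --filename.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
    $SPDK/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second run above reuses the same pattern with mock_sgl_config.fio; only the job file changes.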
00:20:54.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.055 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.056 16:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:55.958 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:55.958 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:55.958 Found net devices under 0000:09:00.0: cvl_0_0 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:55.958 Found net devices under 0000:09:00.1: cvl_0_1 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:20:55.958 00:20:55.958 --- 10.0.0.2 ping statistics --- 00:20:55.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.958 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:55.958 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:20:55.959 00:20:55.959 --- 10.0.0.1 ping statistics --- 00:20:55.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.959 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=846284 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 846284 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 846284 ']' 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.959 16:13:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:55.959 [2024-07-15 16:13:41.849649] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
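With the namespace wired up, nvmf_tgt is started inside it and the script blocks until the application's JSON-RPC socket answers before any configuration is pushed. A standalone version of that step is sketched below; the polling loop is a stand-in for the waitforlisten helper used by the test, and it assumes the default RPC socket path /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xE: reactors on cores 1-3.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the JSON-RPC socket until the target is ready to accept commands.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 1
done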
00:20:55.959 [2024-07-15 16:13:41.849736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.959 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.959 [2024-07-15 16:13:41.915533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:56.218 [2024-07-15 16:13:42.022148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.218 [2024-07-15 16:13:42.022196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.218 [2024-07-15 16:13:42.022225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.218 [2024-07-15 16:13:42.022236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.218 [2024-07-15 16:13:42.022245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.218 [2024-07-15 16:13:42.022335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.218 [2024-07-15 16:13:42.022394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.218 [2024-07-15 16:13:42.022397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.155 16:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:57.155 [2024-07-15 16:13:43.109484] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.155 16:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:57.718 Malloc0 00:20:57.718 16:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.718 16:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.284 16:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.284 [2024-07-15 16:13:44.276113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.542 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.799 [2024-07-15 
16:13:44.568928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.799 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:59.058 [2024-07-15 16:13:44.825804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=846660 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 846660 /var/tmp/bdevperf.sock 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 846660 ']' 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.058 16:13:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.315 16:13:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.315 16:13:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:59.315 16:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.879 NVMe0n1 00:20:59.879 16:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.138 00:21:00.138 16:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=846784 00:21:00.138 16:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.138 16:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:01.075 16:13:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.333 [2024-07-15 16:13:47.215534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215884] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.215990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the 
state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 [2024-07-15 16:13:47.216350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b070 is same with the state(5) to be set 00:21:01.333 16:13:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:04.623 16:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.623 00:21:04.880 16:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:05.139 [2024-07-15 16:13:50.909360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 
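The failover scenario exercised here is driven entirely over JSON-RPC: the target exports one 64 MiB malloc namespace on three TCP listeners, bdevperf attaches through the first of them, and the script then removes listeners one at a time so the initiator must reconnect elsewhere; the bursts of tcp.c:1607 recv-state messages above coincide with those listener changes. A condensed sketch of the control-plane calls, assuming the target answers on the default /var/tmp/spdk.sock and bdevperf was started with -r /var/tmp/bdevperf.sock as in this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
# Target side: TCP transport, 64 MiB / 512 B-block malloc bdev, subsystem with three listeners.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# Initiator side: attach bdevperf's controller over 4420, then drop that listener to force failover.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420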
00:21:05.139 [2024-07-15 16:13:50.909436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 [2024-07-15 16:13:50.909513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c640 is same with the state(5) to be set 00:21:05.139 16:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:08.455 16:13:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.455 [2024-07-15 16:13:54.180781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.455 16:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:09.391 16:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:09.651 [2024-07-15 16:13:55.481058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 16:13:55.481228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set 00:21:09.651 [2024-07-15 
16:13:55.481252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce70 is same with the state(5) to be set
00:21:09.651 [... previous *ERROR* line repeated 16 more times for tqpair=0x217ce70 ...]
00:21:09.651 16:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 846784
00:21:16.224 0
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 846660 ']'
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 846660'
00:21:16.224 killing process with pid 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 846660
00:21:16.224 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:16.224 [2024-07-15 16:13:44.889114] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization...
00:21:16.224 [2024-07-15 16:13:44.889204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846660 ]
00:21:16.224 EAL: No free 2048 kB hugepages reported on node 1
00:21:16.224 [2024-07-15 16:13:44.950038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:16.224 [2024-07-15 16:13:45.061611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:16.224 Running I/O for 15 seconds...
00:21:16.224 [2024-07-15 16:13:47.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:16.224 [2024-07-15 16:13:47.217847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for READ lba:79608-79984 and WRITE lba:79992-80496, every command ABORTED - SQ DELETION ...]
[... nvme_qpair_manual_complete_request / nvme_qpair_abort_queued_reqs: queued WRITE commands lba:80504-80616 completed manually, every command ABORTED - SQ DELETION ...]
00:21:16.228 [2024-07-15 16:13:47.222056] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf24390 was disconnected and freed. reset controller.
[2024-07-15 16:13:47.222074] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... nvme_admin_qpair_print_command / spdk_nvme_print_completion: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 ABORTED - SQ DELETION ...]
[2024-07-15 16:13:47.222219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 16:13:47.222279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe0f0 (9): Bad file descriptor
[2024-07-15 16:13:47.225558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 16:13:47.391199] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:16.228 [2024-07-15 16:13:50.909851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 16:13:50.909894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for READ lba:117072-117368 and WRITE lba:117384-117464, every command ABORTED - SQ DELETION; run continues ...]
[2024-07-15 16:13:50.911384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117472 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 
[2024-07-15 16:13:50.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.229 [2024-07-15 16:13:50.911884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.229 [2024-07-15 16:13:50.911898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.911914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.911927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.911946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.911969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.911986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 
[2024-07-15 16:13:50.912891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.912949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.230 [2024-07-15 16:13:50.912971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.913006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.230 [2024-07-15 16:13:50.913024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117904 len:8 PRP1 0x0 PRP2 0x0 00:21:16.230 [2024-07-15 16:13:50.913038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.913056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.230 [2024-07-15 16:13:50.913068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.230 [2024-07-15 16:13:50.913079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117912 len:8 PRP1 0x0 PRP2 0x0 00:21:16.230 [2024-07-15 16:13:50.913096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.913110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.230 [2024-07-15 16:13:50.913121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.230 [2024-07-15 16:13:50.913133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117920 len:8 PRP1 0x0 PRP2 0x0 00:21:16.230 [2024-07-15 16:13:50.913146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.913159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.230 [2024-07-15 16:13:50.913170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.230 [2024-07-15 16:13:50.913181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117928 len:8 PRP1 0x0 PRP2 0x0 00:21:16.230 [2024-07-15 16:13:50.913194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.230 [2024-07-15 16:13:50.913207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117936 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117944 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117952 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117960 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117968 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117976 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117984 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117992 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118000 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118008 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118016 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118024 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118032 
len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118040 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118048 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.913949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.913969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118056 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.913982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.913995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.914006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.914017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118064 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.914030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.914055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.914066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118072 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.914078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.914102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.914113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118080 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 
16:13:50.914126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.231 [2024-07-15 16:13:50.914149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.231 [2024-07-15 16:13:50.914161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117376 len:8 PRP1 0x0 PRP2 0x0 00:21:16.231 [2024-07-15 16:13:50.914173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914247] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c8d80 was disconnected and freed. reset controller. 00:21:16.231 [2024-07-15 16:13:50.914266] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:16.231 [2024-07-15 16:13:50.914306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.231 [2024-07-15 16:13:50.914329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.231 [2024-07-15 16:13:50.914359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.231 [2024-07-15 16:13:50.914386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.231 [2024-07-15 16:13:50.914413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:50.914426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:16.231 [2024-07-15 16:13:50.914481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe0f0 (9): Bad file descriptor 00:21:16.231 [2024-07-15 16:13:50.917698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.231 [2024-07-15 16:13:51.027917] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:16.231 [2024-07-15 16:13:55.482303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.231 [2024-07-15 16:13:55.482346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:55.482378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.231 [2024-07-15 16:13:55.482409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.231 [2024-07-15 16:13:55.482427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482669] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.482964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.482985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.232 [2024-07-15 16:13:55.483547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:16.232 [2024-07-15 16:13:55.483577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-07-15 16:13:55.483665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.232 [2024-07-15 16:13:55.483680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.483986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 16:13:55.484759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.233 [2024-07-15 16:13:55.484773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.233 [2024-07-15 
16:13:55.484789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-07-15 16:13:55.484802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.484978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.484993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56072 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.234 [2024-07-15 16:13:55.485911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.485976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.485991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 
[2024-07-15 16:13:55.486009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.486026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.486040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.486055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.486069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.234 [2024-07-15 16:13:55.486084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.234 [2024-07-15 16:13:55.486098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.235 [2024-07-15 16:13:55.486128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.235 [2024-07-15 16:13:55.486157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.235 [2024-07-15 16:13:55.486201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.235 [2024-07-15 16:13:55.486213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56136 len:8 PRP1 0x0 PRP2 0x0 00:21:16.235 [2024-07-15 16:13:55.486226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486291] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c8b70 was disconnected and freed. reset controller. 
00:21:16.235 [2024-07-15 16:13:55.486310] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:16.235 [2024-07-15 16:13:55.486344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.235 [2024-07-15 16:13:55.486362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.235 [2024-07-15 16:13:55.486391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.235 [2024-07-15 16:13:55.486419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.235 [2024-07-15 16:13:55.486446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.235 [2024-07-15 16:13:55.486459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:16.235 [2024-07-15 16:13:55.486505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe0f0 (9): Bad file descriptor 00:21:16.235 [2024-07-15 16:13:55.489728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.235 [2024-07-15 16:13:55.574421] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
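The long run of *NOTICE* lines above is the expected abort path: while bdev_nvme fails the trid over and resets the controller, every command still queued on I/O qpair 1 is completed with ABORTED - SQ DELETION, and the outstanding ASYNC EVENT REQUESTs on the admin queue are aborted the same way. A small, purely illustrative sketch for condensing such a run when reading a saved copy of this console log (the file name below is hypothetical, not produced by the test):

# not part of the test; summarize the abort storm from a saved copy of this console log
grep -c 'ABORTED - SQ DELETION' nvmf-failover-console.log                                         # total aborted completions
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' nvmf-failover-console.log | sort | uniq -c     # path transitions taken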
00:21:16.235 
00:21:16.235 Latency(us) 
00:21:16.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:16.235 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:21:16.235 Verification LBA range: start 0x0 length 0x4000 
00:21:16.235 NVMe0n1 : 15.01 8457.44 33.04 933.73 0.00 13602.24 534.00 45632.47 
00:21:16.235 =================================================================================================================== 
00:21:16.235 Total : 8457.44 33.04 933.73 0.00 13602.24 534.00 45632.47 
00:21:16.235 Received shutdown signal, test time was about 15.000000 seconds 
00:21:16.235 
00:21:16.235 Latency(us) 
00:21:16.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:16.235 =================================================================================================================== 
00:21:16.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=848561 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 848561 /var/tmp/bdevperf.sock 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 848561 ']' 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:16.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
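The host/failover.sh@65-75 trace above is the pass/fail gate for the first half of the test and the hand-off to the second half: the bdevperf output (collected in test/nvmf/host/try.txt, which the script cats further down) must contain exactly three 'Resetting controller successful' lines, one per forced path failure, and bdevperf is then restarted in RPC-server mode (-z -r /var/tmp/bdevperf.sock). A condensed sketch of that check, assuming the same try.txt capture file:

# equivalent of failover.sh@65-67: require exactly three successful controller resets
try=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$try")
(( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }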
00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:16.235 [2024-07-15 16:14:01.959382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.235 16:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:16.235 [2024-07-15 16:14:02.204089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:16.494 16:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.752 NVMe0n1 00:21:16.752 16:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.010 00:21:17.267 16:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.524 00:21:17.524 16:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:17.524 16:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:17.780 16:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.038 16:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:21.316 16:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:21.316 16:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:21.316 16:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=849231 00:21:21.316 16:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.316 16:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 849231 00:21:22.693 0 00:21:22.693 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:22.693 [2024-07-15 16:14:01.451473] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:21:22.693 [2024-07-15 16:14:01.451552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848561 ] 00:21:22.693 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.693 [2024-07-15 16:14:01.512308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.693 [2024-07-15 16:14:01.618978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.693 [2024-07-15 16:14:03.918944] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:22.693 [2024-07-15 16:14:03.919053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.693 [2024-07-15 16:14:03.919076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.693 [2024-07-15 16:14:03.919093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.693 [2024-07-15 16:14:03.919106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.693 [2024-07-15 16:14:03.919120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.693 [2024-07-15 16:14:03.919133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.693 [2024-07-15 16:14:03.919176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.693 [2024-07-15 16:14:03.919192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.693 [2024-07-15 16:14:03.919206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.693 [2024-07-15 16:14:03.919247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.693 [2024-07-15 16:14:03.919277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18580f0 (9): Bad file descriptor 00:21:22.693 [2024-07-15 16:14:03.922129] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:22.693 Running I/O for 1 seconds... 
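Strung together, the failover.sh@76-94 trace above is the second failover pass, driven entirely over JSON-RPC against the freshly started bdevperf: two more listeners are added on the target, the bdevperf-side NVMe0 controller is attached to all three ports so it carries alternate trids, the active 4420 path is dropped to force a failover, one second of verify I/O is run, and the captured log is dumped. A condensed sketch of the same sequence (rpc.py and bdevperf.py shown without the full workspace paths used in the log):

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # alternate trid
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # alternate trid
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the active path
sleep 3
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                                                                                      # 1 s verify run
cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt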
00:21:22.693 
00:21:22.693 Latency(us) 
00:21:22.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:22.693 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:21:22.693 Verification LBA range: start 0x0 length 0x4000 
00:21:22.693 NVMe0n1 : 1.01 8763.06 34.23 0.00 0.00 14542.73 3034.07 12524.66 
00:21:22.693 =================================================================================================================== 
00:21:22.693 Total : 8763.06 34.23 0.00 0.00 14542.73 3034.07 12524.66 
00:21:22.693 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:22.693 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 
00:21:22.952 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:22.952 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:22.952 16:14:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 
00:21:23.209 16:14:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:23.468 16:14:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 848561 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 848561 ']' 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 848561 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 848561 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 848561' 
00:21:26.759 killing process with pid 848561 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 848561 
00:21:26.759 16:14:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 848561 
00:21:27.017 16:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 
00:21:27.017 16:14:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:21:27.275 16:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:27.276 16:14:13 
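The failover.sh@95-113 trace above then tears the remaining paths down one port at a time, re-reading the controller list after each step, before bdevperf (pid 848561) is killed and the subsystem is deleted on the target. The per-port pattern, condensed from the trace (rpc.py path shortened):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # re-check the controller list after the detach
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                      # final target-side cleanup once bdevperf is gone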
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.276 rmmod nvme_tcp 00:21:27.276 rmmod nvme_fabrics 00:21:27.276 rmmod nvme_keyring 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 846284 ']' 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 846284 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 846284 ']' 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 846284 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 846284 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 846284' 00:21:27.276 killing process with pid 846284 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 846284 00:21:27.276 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 846284 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.534 16:14:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.072 16:14:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.072 00:21:30.072 real 0m35.993s 00:21:30.072 user 2m6.268s 00:21:30.072 sys 0m6.031s 00:21:30.072 16:14:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.072 16:14:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:30.072 
************************************ 00:21:30.072 END TEST nvmf_failover 00:21:30.072 ************************************ 00:21:30.072 16:14:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:30.072 16:14:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:30.072 16:14:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:30.072 16:14:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.072 16:14:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.072 ************************************ 00:21:30.072 START TEST nvmf_host_discovery 00:21:30.072 ************************************ 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:30.072 * Looking for test storage... 00:21:30.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.072 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:30.073 16:14:15 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.073 16:14:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.015 16:14:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:32.015 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:32.015 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.015 16:14:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:32.015 Found net devices under 0000:09:00.0: cvl_0_0 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:32.015 Found net devices under 0000:09:00.1: cvl_0_1 00:21:32.015 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.016 16:14:17 
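With the ports resolved to cvl_0_0 (target side) and cvl_0_1 (initiator side), the nvmf_tcp_init steps traced next turn the dual-port NIC into a point-to-point test bed by moving the target port into its own network namespace. Condensed restatement of the commands that follow, nothing added:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                         # namespace that will hold the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

The nvmf_tgt target application itself is then launched under ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), as the invocation further down shows.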
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:21:32.016 00:21:32.016 --- 10.0.0.2 ping statistics --- 00:21:32.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.016 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:32.016 00:21:32.016 --- 10.0.0.1 ping statistics --- 00:21:32.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.016 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=851953 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 851953 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 851953 ']' 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.016 16:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.016 [2024-07-15 16:14:17.774412] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:21:32.016 [2024-07-15 16:14:17.774486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.016 [2024-07-15 16:14:17.836746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.016 [2024-07-15 16:14:17.941953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.016 [2024-07-15 16:14:17.942016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.016 [2024-07-15 16:14:17.942045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.016 [2024-07-15 16:14:17.942056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.016 [2024-07-15 16:14:17.942066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
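Once waitforlisten sees the namespaced nvmf_tgt answering on /var/tmp/spdk.sock, discovery.sh configures the target entirely over JSON-RPC. The trace drives this through the rpc_cmd wrapper; the same sequence expressed with scripts/rpc.py against the default socket (an equivalent sketch, not the harness's literal code) is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                                  # discovery service
  scripts/rpc.py bdev_null_create null0 1000 512                  # null bdevs backing the namespaces
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

null1 is attached as a second namespace, and a second listener on port 4421 is added, later in the run; a separate nvmf_tgt instance started with -r /tmp/host.sock plays the host role.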
00:21:32.016 [2024-07-15 16:14:17.942101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 [2024-07-15 16:14:18.084398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 [2024-07-15 16:14:18.092593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 null0 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 null1 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=851979 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 851979 /tmp/host.sock 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 851979 ']' 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:32.275 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.275 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 [2024-07-15 16:14:18.162512] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:21:32.276 [2024-07-15 16:14:18.162593] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851979 ] 00:21:32.276 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.276 [2024-07-15 16:14:18.219659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.534 [2024-07-15 16:14:18.324972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.534 16:14:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.535 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.793 [2024-07-15 16:14:18.722233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:32.793 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.794 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:33.052 16:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:33.619 [2024-07-15 16:14:19.491556] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:33.619 [2024-07-15 16:14:19.491579] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:33.619 [2024-07-15 16:14:19.491600] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:33.619 [2024-07-15 16:14:19.620047] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:33.878 [2024-07-15 16:14:19.845822] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:33.878 [2024-07-15 16:14:19.845844] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:34.137 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.138 16:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.138 16:14:20 
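The waitforcondition loops in this stretch of the log re-evaluate small jq pipelines against the host application on /tmp/host.sock until discovery has produced the expected state. Condensed, the three helpers the surrounding checks keep calling (rpc_cmd being the harness's JSON-RPC client wrapper) are:

  # get_subsystem_names: controllers attached by the host's bdev_nvme (expects "nvme0")
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs

  # get_bdev_list: block devices exposed through those controllers ("nvme0n1", later "nvme0n1 nvme0n2")
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

  # get_subsystem_paths nvme0: trsvcid of every connected path ("4420", later "4420 4421")
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs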
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.398 [2024-07-15 16:14:20.178533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.398 [2024-07-15 16:14:20.179576] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:34.398 [2024-07-15 16:14:20.179624] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:34.398 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.399 16:14:20 
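The is_notification_count_eq checks interleaved with these steps count how many new bdev notifications arrived since the last recorded notify_id: one when nvme0n1 appears, one more after null1 is attached and nvme0n2 shows up, and none when only listeners change. Condensed from the trace, the counter is a single RPC plus a jq length:

  notify_id=${notify_id:-0}                       # high-water mark; 0 -> 1 -> 2 over this run
  notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
  notify_id=$((notify_id + notification_count))   # advance past the events just counted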
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:34.399 [2024-07-15 16:14:20.308049] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:34.399 16:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:34.399 [2024-07-15 16:14:20.366511] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:34.399 [2024-07-15 16:14:20.366548] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:34.399 [2024-07-15 16:14:20.366557] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:35.336 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 [2024-07-15 16:14:21.398800] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:35.595 [2024-07-15 16:14:21.398842] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.595 [2024-07-15 16:14:21.407093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.595 [2024-07-15 16:14:21.407128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.595 [2024-07-15 16:14:21.407161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.595 [2024-07-15 16:14:21.407177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.595 [2024-07-15 16:14:21.407192] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.595 [2024-07-15 16:14:21.407206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.595 [2024-07-15 16:14:21.407220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.595 [2024-07-15 16:14:21.407234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.595 [2024-07-15 16:14:21.407248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 [2024-07-15 16:14:21.417085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.427125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.427427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.427456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.427472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.427495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.427515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.427528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.427543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.427563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
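The controller-reset ERROR lines around this point are part of the teardown being exercised rather than a failure: removing the 4420 listener (host/discovery.sh@127 above) drops the host's first path, and bdev_nvme keeps retrying the now-dead 10.0.0.2:4420 connection (connect() errno 111, ECONNREFUSED) until the next discovery log page reports that path gone. Condensed, the step and the condition the test then waits for:

  # Target side: drop the first data listener; the discovery listener on 8009 stays up.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Host side: poll until only the second path remains on nvme0.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect: 4421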
00:21:35.595 [2024-07-15 16:14:21.437216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.437458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.437485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.437501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.437523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.437543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.437557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.437570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.437588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:35.595 [2024-07-15 16:14:21.447303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 [2024-07-15 16:14:21.448421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.448469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.448486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.595 [2024-07-15 16:14:21.448509] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.448543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.448561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.448573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.448592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.595 [2024-07-15 16:14:21.457374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.457588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.457616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.457632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.457655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.457701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.457720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.457733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.457752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.595 [2024-07-15 16:14:21.467458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.467649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.467676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.467692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.467713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.467745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.467763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.467776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.467795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
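The waitforcondition xtrace interleaved with these reconnect errors is the harness's generic polling loop: it stores the condition string, allows up to max=10 attempts, and evals the condition until it holds. A rough reconstruction of that helper follows, assuming a one-second pause between retries and a non-zero return once retries are exhausted (neither is visible in the trace):

    waitforcondition() {
        local cond=$1   # condition string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # retry budget seen in the xtrace
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1      # assumed pacing between attempts
        done
        return 1         # assumed failure path when the condition never holds
    }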
00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 [2024-07-15 16:14:21.477542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.477716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.477743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.477759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.477786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.477831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.477850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.477863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.477882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:35.595 [2024-07-15 16:14:21.487623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.595 [2024-07-15 16:14:21.487799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.595 [2024-07-15 16:14:21.487826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166ec00 with addr=10.0.0.2, port=4420 00:21:35.595 [2024-07-15 16:14:21.487842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166ec00 is same with the state(5) to be set 00:21:35.595 [2024-07-15 16:14:21.487863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166ec00 (9): Bad file descriptor 00:21:35.595 [2024-07-15 16:14:21.487917] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:35.595 [2024-07-15 16:14:21.487946] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:35.595 [2024-07-15 16:14:21.488003] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.595 [2024-07-15 16:14:21.488024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.595 [2024-07-15 16:14:21.488039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.595 [2024-07-15 16:14:21.488061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:35.595 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.596 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.596 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.596 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.596 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.596 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.855 16:14:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.855 16:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.791 [2024-07-15 16:14:22.729479] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:36.791 [2024-07-15 16:14:22.729500] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:36.791 [2024-07-15 16:14:22.729520] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.049 [2024-07-15 16:14:22.815797] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:37.049 [2024-07-15 16:14:22.876597] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:37.050 [2024-07-15 16:14:22.876627] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.050 request: 00:21:37.050 { 00:21:37.050 "name": "nvme", 00:21:37.050 "trtype": "tcp", 00:21:37.050 "traddr": "10.0.0.2", 00:21:37.050 "adrfam": "ipv4", 00:21:37.050 "trsvcid": "8009", 00:21:37.050 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:37.050 "wait_for_attach": true, 00:21:37.050 "method": "bdev_nvme_start_discovery", 00:21:37.050 "req_id": 1 00:21:37.050 } 00:21:37.050 Got JSON-RPC error response 00:21:37.050 response: 00:21:37.050 { 00:21:37.050 "code": -17, 00:21:37.050 "message": "File exists" 00:21:37.050 } 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 request: 00:21:37.050 { 00:21:37.050 "name": "nvme_second", 00:21:37.050 "trtype": "tcp", 00:21:37.050 "traddr": "10.0.0.2", 00:21:37.050 "adrfam": "ipv4", 00:21:37.050 "trsvcid": "8009", 00:21:37.050 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:37.050 "wait_for_attach": true, 00:21:37.050 "method": "bdev_nvme_start_discovery", 00:21:37.050 "req_id": 1 00:21:37.050 } 00:21:37.050 Got JSON-RPC error response 00:21:37.050 response: 00:21:37.050 { 00:21:37.050 "code": -17, 00:21:37.050 "message": "File exists" 00:21:37.050 } 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:37.050 16:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.050 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.309 16:14:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.309 16:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.245 [2024-07-15 16:14:24.072140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.245 [2024-07-15 16:14:24.072183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1689c90 with addr=10.0.0.2, port=8010 00:21:38.245 [2024-07-15 16:14:24.072206] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:38.245 [2024-07-15 16:14:24.072220] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:38.245 [2024-07-15 16:14:24.072233] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:39.184 [2024-07-15 16:14:25.074709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.184 [2024-07-15 16:14:25.074789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1689c90 with addr=10.0.0.2, port=8010 00:21:39.184 [2024-07-15 16:14:25.074819] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:39.184 [2024-07-15 16:14:25.074833] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:39.184 [2024-07-15 16:14:25.074846] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:40.121 [2024-07-15 16:14:26.076834] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:40.121 request: 00:21:40.121 { 00:21:40.121 "name": "nvme_second", 00:21:40.121 "trtype": "tcp", 00:21:40.121 "traddr": "10.0.0.2", 00:21:40.121 "adrfam": "ipv4", 00:21:40.121 "trsvcid": "8010", 00:21:40.121 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:40.121 "wait_for_attach": false, 00:21:40.121 "attach_timeout_ms": 3000, 00:21:40.121 "method": "bdev_nvme_start_discovery", 00:21:40.121 "req_id": 1 00:21:40.121 } 00:21:40.121 Got JSON-RPC error response 00:21:40.121 response: 00:21:40.121 { 00:21:40.121 "code": -110, 
00:21:40.121 "message": "Connection timed out" 00:21:40.121 } 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:40.121 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.380 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 851979 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.381 rmmod nvme_tcp 00:21:40.381 rmmod nvme_fabrics 00:21:40.381 rmmod nvme_keyring 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 851953 ']' 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 851953 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 851953 ']' 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 851953 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 851953 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.381 
16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 851953' 00:21:40.381 killing process with pid 851953 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 851953 00:21:40.381 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 851953 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.639 16:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.171 00:21:43.171 real 0m12.964s 00:21:43.171 user 0m18.825s 00:21:43.171 sys 0m2.686s 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.171 ************************************ 00:21:43.171 END TEST nvmf_host_discovery 00:21:43.171 ************************************ 00:21:43.171 16:14:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:43.171 16:14:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:43.171 16:14:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.171 16:14:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.171 16:14:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:43.171 ************************************ 00:21:43.171 START TEST nvmf_host_multipath_status 00:21:43.171 ************************************ 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:43.171 * Looking for test storage... 
00:21:43.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.171 16:14:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.171 16:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.548 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:44.548 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:44.807 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:44.807 Found net devices under 0000:09:00.0: cvl_0_0 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:44.807 Found net devices under 0000:09:00.1: cvl_0_1 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.807 16:14:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:44.807 00:21:44.807 --- 10.0.0.2 ping statistics --- 00:21:44.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.807 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:44.807 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:21:44.807 00:21:44.807 --- 10.0.0.1 ping statistics --- 00:21:44.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.807 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=855005 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 855005 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 855005 ']' 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.808 16:14:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:44.808 [2024-07-15 16:14:30.760148] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:21:44.808 [2024-07-15 16:14:30.760241] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.808 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.065 [2024-07-15 16:14:30.823280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:45.065 [2024-07-15 16:14:30.928449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.065 [2024-07-15 16:14:30.928510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.065 [2024-07-15 16:14:30.928541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.065 [2024-07-15 16:14:30.928553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.065 [2024-07-15 16:14:30.928562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.065 [2024-07-15 16:14:30.928624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.065 [2024-07-15 16:14:30.928629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.065 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=855005 00:21:45.322 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:45.580 [2024-07-15 16:14:31.342282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.580 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:45.839 Malloc0 00:21:45.839 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:46.097 16:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.354 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.613 [2024-07-15 16:14:32.368822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.613 16:14:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.613 [2024-07-15 16:14:32.613432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=855286 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 855286 /var/tmp/bdevperf.sock 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 855286 ']' 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.894 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:47.155 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.155 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:47.155 16:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:47.438 16:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:48.004 Nvme0n1 00:21:48.004 16:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:48.262 Nvme0n1 00:21:48.262 16:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:48.262 16:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.163 16:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:50.163 16:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:50.421 16:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:50.988 16:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:51.920 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:51.921 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:51.921 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.921 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:52.177 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.177 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:52.177 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.177 16:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:52.433 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:52.433 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:52.433 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.433 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:52.689 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.689 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:52.689 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.689 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:52.946 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.946 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:52.946 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.946 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:53.203 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.203 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:53.203 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.203 16:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:53.461 16:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.461 16:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:53.461 16:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:53.719 16:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:53.977 16:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:54.911 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:54.911 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:54.911 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.911 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:55.168 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:55.168 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:55.168 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.168 16:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:55.426 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.426 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:55.426 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.426 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:55.683 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.683 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:55.684 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.684 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:55.941 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.941 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:55.941 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.941 16:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:56.198 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.198 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:56.198 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.198 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:56.455 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.455 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:56.455 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:56.713 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:56.970 16:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:57.902 16:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:57.902 16:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:57.902 16:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.902 16:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.159 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.159 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:58.159 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.159 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.417 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.417 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.417 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.417 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.674 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.674 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.674 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.674 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:58.931 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.931 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:58.931 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.931 16:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.188 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.188 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.188 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.188 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:59.444 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.444 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:59.444 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:59.758 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:00.017 16:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:00.949 16:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:00.949 16:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:00.949 16:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.949 16:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.207 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.207 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:01.207 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.207 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.464 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:01.464 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.464 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.464 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.721 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.721 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.721 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.721 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:01.980 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.980 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:01.980 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.980 16:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.308 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:02.308 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:02.308 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.308 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:02.566 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:02.566 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:02.566 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:02.566 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:02.824 16:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:04.196 16:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:04.196 16:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:04.196 16:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.196 16:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.196 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.196 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.196 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.196 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.454 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.454 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.454 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.454 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.712 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.712 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:04.712 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.712 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:04.969 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.969 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:04.969 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.969 16:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.226 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.226 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:05.226 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.226 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.484 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.485 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:05.485 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:05.742 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.001 16:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:06.934 16:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:06.934 16:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:06.934 16:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.934 16:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.192 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.192 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:07.192 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.192 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.449 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.449 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.449 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.449 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:07.707 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.707 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:07.707 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.707 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:07.964 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.964 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:07.964 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.964 16:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.222 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.222 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.222 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.222 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.480 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.480 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:08.738 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:08.738 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:08.996 16:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.255 16:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:10.186 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:10.186 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:10.186 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.186 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.444 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.444 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:10.444 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.444 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.701 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.701 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.701 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.701 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.959 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.959 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.959 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.959 16:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.217 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.217 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.217 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.217 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.475 16:14:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.475 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.475 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.475 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.733 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.733 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:11.733 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:11.991 16:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:12.249 16:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:13.183 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:13.183 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:13.184 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.184 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.442 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.442 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:13.442 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.442 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.700 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.700 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.700 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.700 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.958 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.958 16:14:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.958 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.958 16:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:14.215 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.215 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:14.215 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.215 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.473 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.473 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:14.473 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.473 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.730 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.730 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:14.730 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:14.987 16:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:15.244 16:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:16.177 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:16.177 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:16.177 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.177 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.435 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.435 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:16.435 16:15:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.435 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.754 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.754 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.754 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.754 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.012 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.012 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:17.012 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.012 16:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.269 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.269 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.269 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.269 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.527 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.527 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.527 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.527 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.784 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.784 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:17.784 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:18.041 16:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:18.298 16:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:19.229 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:19.229 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:19.229 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.229 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.487 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.487 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:19.487 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.487 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.745 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.745 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.745 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.745 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.004 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.004 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.004 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.004 16:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.261 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.261 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.261 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.261 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.519 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.519 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:20.519 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.519 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 855286 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 855286 ']' 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 855286 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855286 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855286' 00:22:20.776 killing process with pid 855286 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 855286 00:22:20.776 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 855286 00:22:21.039 Connection closed with partial response: 00:22:21.039 00:22:21.039 00:22:21.039 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 855286 00:22:21.039 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.039 [2024-07-15 16:14:32.677725] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:22:21.039 [2024-07-15 16:14:32.677813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855286 ] 00:22:21.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.039 [2024-07-15 16:14:32.738572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.039 [2024-07-15 16:14:32.851655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.039 Running I/O for 90 seconds... 
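
Every status check in the run above follows the same two-step pattern: flip the ANA state of one listener with nvmf_subsystem_listener_set_ana_state, then ask the bdevperf instance (over /var/tmp/bdevperf.sock) which I/O paths are current, connected and accessible via bdev_nvme_get_io_paths plus a jq filter. A condensed sketch of that pattern follows; the rpc.py calls and jq filters are copied from the log, while the port_status helper mirrors the name used in host/multipath_status.sh but its body here is an illustrative reconstruction, not the script itself:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # e.g. make the 4420 listener non-optimized and the 4421 listener inaccessible
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1                                   # let the host pick up the ANA change

    # report one attribute (current/connected/accessible) of the path using a given port
    port_status() {                           # usage: port_status <trsvcid> <attr>
        $RPC -s "$BPERF_SOCK" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2"
    }

    port_status 4420 current                  # expected: true  (only usable path left)
    port_status 4421 accessible               # expected: false (listener is inaccessible)

The qpair dump that follows shows the other side of the same mechanism: writes landing on a path whose ANA group has been set to inaccessible complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what the host's multipath layer reacts to by steering I/O onto the remaining accessible path.
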
00:22:21.039 [2024-07-15 16:14:48.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.533963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.533988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.534018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.039 [2024-07-15 16:14:48.534040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:21.039 [2024-07-15 16:14:48.534070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534411] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.534964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.534997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.535020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.535072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.535123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.040 [2024-07-15 16:14:48.535175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.040 [2024-07-15 16:14:48.535226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.040 [2024-07-15 16:14:48.535284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.040 [2024-07-15 16:14:48.535338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.040 [2024-07-15 16:14:48.535870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.535938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.535985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.040 [2024-07-15 16:14:48.536919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:21.040 [2024-07-15 16:14:48.536963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.536988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:21.041 [2024-07-15 16:14:48.537814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.537931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.537954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.041 [2024-07-15 16:14:48.538027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.538918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.538961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.041 [2024-07-15 16:14:48.539451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:21.041 [2024-07-15 16:14:48.539738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.042 [2024-07-15 16:14:48.539770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.539827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.539855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.539901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.539941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.540905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.540932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.541933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.541999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.542028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.542073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.542100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:14:48.542145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:14:48.542172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:21.042 
[2024-07-15 16:15:04.165669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.042 [2024-07-15 16:15:04.165748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.167900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.167928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.167965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.167985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.168027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.168066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.168117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.168156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.042 [2024-07-15 16:15:04.168195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.042 [2024-07-15 16:15:04.168217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.043 [2024-07-15 16:15:04.168504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.043 [2024-07-15 16:15:04.168542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.168948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.168977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:21.043 [2024-07-15 16:15:04.169554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.043 [2024-07-15 16:15:04.169571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:21.044 [2024-07-15 16:15:04.169593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.044 [2024-07-15 16:15:04.169609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:21.044 [2024-07-15 16:15:04.169633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.044 [2024-07-15 16:15:04.169649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:21.044 [2024-07-15 16:15:04.169672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.044 [2024-07-15 16:15:04.169688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:21.044 [2024-07-15 16:15:04.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.044 [2024-07-15 16:15:04.169727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:21.044 Received shutdown signal, test time was about 32.396678 seconds 00:22:21.044 00:22:21.044 Latency(us) 00:22:21.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.044 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.044 Verification LBA range: start 0x0 length 0x4000 00:22:21.044 Nvme0n1 : 32.40 8154.73 31.85 0.00 0.00 15669.11 403.53 4026531.84 00:22:21.044 =================================================================================================================== 00:22:21.044 Total : 8154.73 31.85 0.00 0.00 15669.11 403.53 4026531.84 00:22:21.044 16:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
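The device summary above closes out the multipath_status run: runtime in seconds, IOPS, MiB/s, fail and timeout rates, then average/min/max latency in microseconds for the verify job (queue depth 128, 4 KiB IOs). As a quick cross-check (a hypothetical snippet, not part of the test harness), the MiB/s column should equal IOPS times the IO size, and IOPS should roughly equal depth divided by average latency (Little's law):

  awk 'BEGIN {
      iops = 8154.73; iosize = 4096; depth = 128; avg_us = 15669.11
      # 4 KiB IOs: bandwidth in MiB/s = IOPS * 4096 / 2^20
      printf "MiB/s from IOPS        : %.2f (reported 31.85)\n", iops * iosize / (1024 * 1024)
      # Little law at queue depth 128: IOPS ~ depth / average latency
      printf "IOPS from depth/latency: %.0f (reported %.2f)\n", depth / (avg_us / 1e6), iops
  }'

Both come out within rounding of the reported figures (about 31.86 MiB/s and 8169 IOPS), so the summary row is internally consistent.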
00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.301 rmmod nvme_tcp 00:22:21.301 rmmod nvme_fabrics 00:22:21.301 rmmod nvme_keyring 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 855005 ']' 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 855005 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 855005 ']' 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 855005 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.301 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855005 00:22:21.557 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.557 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.557 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855005' 00:22:21.557 killing process with pid 855005 00:22:21.557 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 855005 00:22:21.557 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 855005 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.813 16:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.709 16:15:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.709 00:22:23.709 real 0m41.031s 00:22:23.709 user 2m3.050s 00:22:23.709 sys 0m10.816s 00:22:23.709 16:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.709 16:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:23.709 ************************************ 00:22:23.709 END TEST nvmf_host_multipath_status 00:22:23.709 ************************************ 00:22:23.709 16:15:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:23.709 16:15:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:23.709 16:15:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:23.709 16:15:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.709 16:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.709 ************************************ 00:22:23.709 START TEST nvmf_discovery_remove_ifc 00:22:23.709 ************************************ 00:22:23.709 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:23.968 * Looking for test storage... 00:22:23.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.968 16:15:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.968 16:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:25.870 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.870 16:15:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:25.870 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:25.870 Found net devices under 0000:09:00.0: cvl_0_0 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:25.870 Found net devices under 0000:09:00.1: cvl_0_1 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.870 16:15:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.870 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.871 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:22:26.130 00:22:26.130 --- 10.0.0.2 ping statistics --- 00:22:26.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.130 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:22:26.130 00:22:26.130 --- 10.0.0.1 ping statistics --- 00:22:26.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.130 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=862126 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 862126 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 862126 ']' 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.130 16:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.130 [2024-07-15 16:15:11.975106] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:22:26.130 [2024-07-15 16:15:11.975185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.130 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.130 [2024-07-15 16:15:12.041165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.389 [2024-07-15 16:15:12.155520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.389 [2024-07-15 16:15:12.155574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.389 [2024-07-15 16:15:12.155588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.389 [2024-07-15 16:15:12.155599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.389 [2024-07-15 16:15:12.155608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.389 [2024-07-15 16:15:12.155635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.389 [2024-07-15 16:15:12.303416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.389 [2024-07-15 16:15:12.311577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:26.389 null0 00:22:26.389 [2024-07-15 16:15:12.343482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=862153 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 862153 /tmp/host.sock 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 862153 ']' 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:26.389 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.389 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.648 [2024-07-15 16:15:12.406824] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:22:26.648 [2024-07-15 16:15:12.406897] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862153 ] 00:22:26.648 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.648 [2024-07-15 16:15:12.463967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.648 [2024-07-15 16:15:12.568914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.906 16:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:27.837 [2024-07-15 16:15:13.822626] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:27.838 [2024-07-15 16:15:13.822663] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:27.838 [2024-07-15 16:15:13.822686] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:28.094 [2024-07-15 16:15:13.949105] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:28.351 [2024-07-15 16:15:14.174883] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:28.351 [2024-07-15 16:15:14.174952] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:28.351 [2024-07-15 16:15:14.175031] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:28.351 [2024-07-15 16:15:14.175056] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:28.351 [2024-07-15 16:15:14.175094] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:28.351 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.351 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:28.351 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.351 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.352 [2024-07-15 16:15:14.179765] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x124d870 was disconnected and freed. delete nvme_qpair. 
00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.352 16:15:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.727 16:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.657 16:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.588 16:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.520 16:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.892 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.892 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.892 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.892 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.893 16:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.893 [2024-07-15 16:15:19.616064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:33.893 [2024-07-15 16:15:19.616125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.893 [2024-07-15 16:15:19.616146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.893 [2024-07-15 16:15:19.616162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.893 [2024-07-15 16:15:19.616184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.893 [2024-07-15 16:15:19.616198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.893 [2024-07-15 16:15:19.616210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.893 [2024-07-15 16:15:19.616224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.893 [2024-07-15 16:15:19.616268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.893 [2024-07-15 16:15:19.616282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.893 [2024-07-15 16:15:19.616294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.893 [2024-07-15 16:15:19.616305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214300 is same with the state(5) to be set 00:22:33.893 [2024-07-15 16:15:19.626083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214300 (9): Bad file descriptor 00:22:33.893 [2024-07-15 16:15:19.636130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.824 [2024-07-15 16:15:20.665999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:34.824 [2024-07-15 
16:15:20.666067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1214300 with addr=10.0.0.2, port=4420 00:22:34.824 [2024-07-15 16:15:20.666091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214300 is same with the state(5) to be set 00:22:34.824 [2024-07-15 16:15:20.666136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214300 (9): Bad file descriptor 00:22:34.824 [2024-07-15 16:15:20.666575] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:34.824 [2024-07-15 16:15:20.666606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:34.824 [2024-07-15 16:15:20.666621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:34.824 [2024-07-15 16:15:20.666636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:34.824 [2024-07-15 16:15:20.666667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.824 [2024-07-15 16:15:20.666684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.824 16:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.754 [2024-07-15 16:15:21.669190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:35.754 [2024-07-15 16:15:21.669263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:35.754 [2024-07-15 16:15:21.669277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:35.754 [2024-07-15 16:15:21.669290] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:35.754 [2024-07-15 16:15:21.669319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:35.754 [2024-07-15 16:15:21.669363] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:35.754 [2024-07-15 16:15:21.669426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.754 [2024-07-15 16:15:21.669447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 16:15:21.669465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.754 [2024-07-15 16:15:21.669478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 16:15:21.669492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.754 [2024-07-15 16:15:21.669505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 16:15:21.669519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.754 [2024-07-15 16:15:21.669532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 16:15:21.669545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.754 [2024-07-15 16:15:21.669558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 16:15:21.669571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:35.754 [2024-07-15 16:15:21.669689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1213780 (9): Bad file descriptor 00:22:35.754 [2024-07-15 16:15:21.670701] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:35.754 [2024-07-15 16:15:21.670721] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.754 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.755 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:35.755 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.755 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:36.012 16:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:36.944 16:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.875 [2024-07-15 16:15:23.686394] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:37.875 [2024-07-15 16:15:23.686416] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:37.875 [2024-07-15 16:15:23.686439] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.875 [2024-07-15 16:15:23.772714] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.875 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.133 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:38.133 16:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.133 [2024-07-15 16:15:23.951961] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:38.133 [2024-07-15 16:15:23.952021] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:38.133 [2024-07-15 16:15:23.952055] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:38.133 [2024-07-15 16:15:23.952080] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:38.133 [2024-07-15 16:15:23.952094] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:38.133 [2024-07-15 16:15:23.996591] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x121b110 was disconnected and freed. delete nvme_qpair. 
00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 862153 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 862153 ']' 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 862153 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862153 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862153' 00:22:39.065 killing process with pid 862153 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 862153 00:22:39.065 16:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 862153 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.323 rmmod nvme_tcp 00:22:39.323 rmmod nvme_fabrics 00:22:39.323 rmmod nvme_keyring 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:39.323 
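The rmmod messages above and the killprocess/remove_spdk_ns lines that follow are the usual nvmftestfini teardown. Done by hand it amounts to roughly the following sketch; the PID 862126 and the cvl_* names are specific to this run, and _remove_spdk_ns is assumed here to reduce to deleting the target-side namespace:

  # Unload the initiator-side kernel modules, stop the target app, and tear down the test netns.
  sync
  modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 862126                      # nvmf target app started earlier by nvmfappstart (PID from this run)
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1         # clear the initiator-side address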
16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 862126 ']' 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 862126 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 862126 ']' 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 862126 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862126 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862126' 00:22:39.323 killing process with pid 862126 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 862126 00:22:39.323 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 862126 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.581 16:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.146 16:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.146 00:22:42.146 real 0m17.929s 00:22:42.146 user 0m26.090s 00:22:42.146 sys 0m3.050s 00:22:42.146 16:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.146 16:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.146 ************************************ 00:22:42.146 END TEST nvmf_discovery_remove_ifc 00:22:42.146 ************************************ 00:22:42.146 16:15:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.146 16:15:27 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:42.146 16:15:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.146 16:15:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.146 16:15:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.146 ************************************ 00:22:42.146 START TEST nvmf_identify_kernel_target 00:22:42.146 ************************************ 00:22:42.146 16:15:27 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:42.146 * Looking for test storage... 00:22:42.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.146 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:42.147 16:15:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.147 16:15:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:44.052 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:44.052 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.052 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:44.053 Found net devices under 0000:09:00.0: cvl_0_0 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:44.053 Found net devices under 0000:09:00.1: cvl_0_1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:22:44.053 00:22:44.053 --- 10.0.0.2 ping statistics --- 00:22:44.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.053 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:44.053 00:22:44.053 --- 10.0.0.1 ping statistics --- 00:22:44.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.053 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:44.053 16:15:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:44.053 16:15:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:44.053 16:15:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:44.053 16:15:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:45.429 Waiting for block devices as requested 00:22:45.429 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:45.429 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:45.429 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:45.689 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:45.689 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:45.689 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:45.947 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:45.947 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:45.947 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:46.206 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:46.206 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:46.206 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:46.206 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:46.465 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:46.465 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:46.465 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:46.465 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:46.723 No valid GPT data, bailing 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:46.723 00:22:46.723 Discovery Log Number of Records 2, Generation counter 2 00:22:46.723 =====Discovery Log Entry 0====== 00:22:46.723 trtype: tcp 00:22:46.723 adrfam: ipv4 00:22:46.723 subtype: current discovery subsystem 00:22:46.723 treq: not specified, sq flow control disable supported 00:22:46.723 portid: 1 00:22:46.723 trsvcid: 4420 00:22:46.723 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:46.723 traddr: 10.0.0.1 00:22:46.723 eflags: none 00:22:46.723 sectype: none 00:22:46.723 =====Discovery Log Entry 1====== 00:22:46.723 trtype: tcp 00:22:46.723 adrfam: ipv4 00:22:46.723 subtype: nvme subsystem 00:22:46.723 treq: not specified, sq flow control disable supported 00:22:46.723 portid: 1 00:22:46.723 trsvcid: 4420 00:22:46.723 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:46.723 traddr: 10.0.0.1 00:22:46.723 eflags: none 00:22:46.723 sectype: none 00:22:46.723 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:46.723 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:46.723 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.984 ===================================================== 00:22:46.984 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:46.984 ===================================================== 00:22:46.984 Controller Capabilities/Features 00:22:46.984 ================================ 00:22:46.984 Vendor ID: 0000 00:22:46.984 Subsystem Vendor ID: 0000 00:22:46.984 Serial Number: b3e26a8f9e75531c47da 00:22:46.984 Model Number: Linux 00:22:46.984 Firmware Version: 6.7.0-68 00:22:46.984 Recommended Arb Burst: 0 00:22:46.984 IEEE OUI Identifier: 00 00 00 00:22:46.984 Multi-path I/O 00:22:46.984 May have multiple subsystem ports: No 00:22:46.984 May have multiple 
controllers: No 00:22:46.984 Associated with SR-IOV VF: No 00:22:46.984 Max Data Transfer Size: Unlimited 00:22:46.984 Max Number of Namespaces: 0 00:22:46.984 Max Number of I/O Queues: 1024 00:22:46.984 NVMe Specification Version (VS): 1.3 00:22:46.984 NVMe Specification Version (Identify): 1.3 00:22:46.984 Maximum Queue Entries: 1024 00:22:46.984 Contiguous Queues Required: No 00:22:46.984 Arbitration Mechanisms Supported 00:22:46.984 Weighted Round Robin: Not Supported 00:22:46.984 Vendor Specific: Not Supported 00:22:46.984 Reset Timeout: 7500 ms 00:22:46.984 Doorbell Stride: 4 bytes 00:22:46.984 NVM Subsystem Reset: Not Supported 00:22:46.984 Command Sets Supported 00:22:46.984 NVM Command Set: Supported 00:22:46.984 Boot Partition: Not Supported 00:22:46.984 Memory Page Size Minimum: 4096 bytes 00:22:46.984 Memory Page Size Maximum: 4096 bytes 00:22:46.984 Persistent Memory Region: Not Supported 00:22:46.984 Optional Asynchronous Events Supported 00:22:46.984 Namespace Attribute Notices: Not Supported 00:22:46.984 Firmware Activation Notices: Not Supported 00:22:46.984 ANA Change Notices: Not Supported 00:22:46.984 PLE Aggregate Log Change Notices: Not Supported 00:22:46.984 LBA Status Info Alert Notices: Not Supported 00:22:46.984 EGE Aggregate Log Change Notices: Not Supported 00:22:46.984 Normal NVM Subsystem Shutdown event: Not Supported 00:22:46.984 Zone Descriptor Change Notices: Not Supported 00:22:46.984 Discovery Log Change Notices: Supported 00:22:46.984 Controller Attributes 00:22:46.984 128-bit Host Identifier: Not Supported 00:22:46.984 Non-Operational Permissive Mode: Not Supported 00:22:46.984 NVM Sets: Not Supported 00:22:46.984 Read Recovery Levels: Not Supported 00:22:46.984 Endurance Groups: Not Supported 00:22:46.984 Predictable Latency Mode: Not Supported 00:22:46.984 Traffic Based Keep ALive: Not Supported 00:22:46.984 Namespace Granularity: Not Supported 00:22:46.984 SQ Associations: Not Supported 00:22:46.984 UUID List: Not Supported 00:22:46.984 Multi-Domain Subsystem: Not Supported 00:22:46.984 Fixed Capacity Management: Not Supported 00:22:46.984 Variable Capacity Management: Not Supported 00:22:46.984 Delete Endurance Group: Not Supported 00:22:46.984 Delete NVM Set: Not Supported 00:22:46.984 Extended LBA Formats Supported: Not Supported 00:22:46.984 Flexible Data Placement Supported: Not Supported 00:22:46.984 00:22:46.984 Controller Memory Buffer Support 00:22:46.984 ================================ 00:22:46.984 Supported: No 00:22:46.984 00:22:46.984 Persistent Memory Region Support 00:22:46.984 ================================ 00:22:46.984 Supported: No 00:22:46.984 00:22:46.984 Admin Command Set Attributes 00:22:46.984 ============================ 00:22:46.984 Security Send/Receive: Not Supported 00:22:46.984 Format NVM: Not Supported 00:22:46.984 Firmware Activate/Download: Not Supported 00:22:46.984 Namespace Management: Not Supported 00:22:46.984 Device Self-Test: Not Supported 00:22:46.984 Directives: Not Supported 00:22:46.984 NVMe-MI: Not Supported 00:22:46.984 Virtualization Management: Not Supported 00:22:46.984 Doorbell Buffer Config: Not Supported 00:22:46.984 Get LBA Status Capability: Not Supported 00:22:46.984 Command & Feature Lockdown Capability: Not Supported 00:22:46.984 Abort Command Limit: 1 00:22:46.984 Async Event Request Limit: 1 00:22:46.984 Number of Firmware Slots: N/A 00:22:46.984 Firmware Slot 1 Read-Only: N/A 00:22:46.984 Firmware Activation Without Reset: N/A 00:22:46.984 Multiple Update Detection Support: N/A 
00:22:46.984 Firmware Update Granularity: No Information Provided 00:22:46.984 Per-Namespace SMART Log: No 00:22:46.984 Asymmetric Namespace Access Log Page: Not Supported 00:22:46.984 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:46.984 Command Effects Log Page: Not Supported 00:22:46.984 Get Log Page Extended Data: Supported 00:22:46.984 Telemetry Log Pages: Not Supported 00:22:46.984 Persistent Event Log Pages: Not Supported 00:22:46.984 Supported Log Pages Log Page: May Support 00:22:46.984 Commands Supported & Effects Log Page: Not Supported 00:22:46.984 Feature Identifiers & Effects Log Page:May Support 00:22:46.984 NVMe-MI Commands & Effects Log Page: May Support 00:22:46.984 Data Area 4 for Telemetry Log: Not Supported 00:22:46.984 Error Log Page Entries Supported: 1 00:22:46.984 Keep Alive: Not Supported 00:22:46.984 00:22:46.984 NVM Command Set Attributes 00:22:46.984 ========================== 00:22:46.984 Submission Queue Entry Size 00:22:46.984 Max: 1 00:22:46.984 Min: 1 00:22:46.984 Completion Queue Entry Size 00:22:46.984 Max: 1 00:22:46.984 Min: 1 00:22:46.984 Number of Namespaces: 0 00:22:46.984 Compare Command: Not Supported 00:22:46.984 Write Uncorrectable Command: Not Supported 00:22:46.984 Dataset Management Command: Not Supported 00:22:46.984 Write Zeroes Command: Not Supported 00:22:46.985 Set Features Save Field: Not Supported 00:22:46.985 Reservations: Not Supported 00:22:46.985 Timestamp: Not Supported 00:22:46.985 Copy: Not Supported 00:22:46.985 Volatile Write Cache: Not Present 00:22:46.985 Atomic Write Unit (Normal): 1 00:22:46.985 Atomic Write Unit (PFail): 1 00:22:46.985 Atomic Compare & Write Unit: 1 00:22:46.985 Fused Compare & Write: Not Supported 00:22:46.985 Scatter-Gather List 00:22:46.985 SGL Command Set: Supported 00:22:46.985 SGL Keyed: Not Supported 00:22:46.985 SGL Bit Bucket Descriptor: Not Supported 00:22:46.985 SGL Metadata Pointer: Not Supported 00:22:46.985 Oversized SGL: Not Supported 00:22:46.985 SGL Metadata Address: Not Supported 00:22:46.985 SGL Offset: Supported 00:22:46.985 Transport SGL Data Block: Not Supported 00:22:46.985 Replay Protected Memory Block: Not Supported 00:22:46.985 00:22:46.985 Firmware Slot Information 00:22:46.985 ========================= 00:22:46.985 Active slot: 0 00:22:46.985 00:22:46.985 00:22:46.985 Error Log 00:22:46.985 ========= 00:22:46.985 00:22:46.985 Active Namespaces 00:22:46.985 ================= 00:22:46.985 Discovery Log Page 00:22:46.985 ================== 00:22:46.985 Generation Counter: 2 00:22:46.985 Number of Records: 2 00:22:46.985 Record Format: 0 00:22:46.985 00:22:46.985 Discovery Log Entry 0 00:22:46.985 ---------------------- 00:22:46.985 Transport Type: 3 (TCP) 00:22:46.985 Address Family: 1 (IPv4) 00:22:46.985 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:46.985 Entry Flags: 00:22:46.985 Duplicate Returned Information: 0 00:22:46.985 Explicit Persistent Connection Support for Discovery: 0 00:22:46.985 Transport Requirements: 00:22:46.985 Secure Channel: Not Specified 00:22:46.985 Port ID: 1 (0x0001) 00:22:46.985 Controller ID: 65535 (0xffff) 00:22:46.985 Admin Max SQ Size: 32 00:22:46.985 Transport Service Identifier: 4420 00:22:46.985 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:46.985 Transport Address: 10.0.0.1 00:22:46.985 Discovery Log Entry 1 00:22:46.985 ---------------------- 00:22:46.985 Transport Type: 3 (TCP) 00:22:46.985 Address Family: 1 (IPv4) 00:22:46.985 Subsystem Type: 2 (NVM Subsystem) 00:22:46.985 Entry Flags: 
00:22:46.985 Duplicate Returned Information: 0 00:22:46.985 Explicit Persistent Connection Support for Discovery: 0 00:22:46.985 Transport Requirements: 00:22:46.985 Secure Channel: Not Specified 00:22:46.985 Port ID: 1 (0x0001) 00:22:46.985 Controller ID: 65535 (0xffff) 00:22:46.985 Admin Max SQ Size: 32 00:22:46.985 Transport Service Identifier: 4420 00:22:46.985 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:46.985 Transport Address: 10.0.0.1 00:22:46.985 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.985 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.985 get_feature(0x01) failed 00:22:46.985 get_feature(0x02) failed 00:22:46.985 get_feature(0x04) failed 00:22:46.985 ===================================================== 00:22:46.985 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:46.985 ===================================================== 00:22:46.985 Controller Capabilities/Features 00:22:46.985 ================================ 00:22:46.985 Vendor ID: 0000 00:22:46.985 Subsystem Vendor ID: 0000 00:22:46.985 Serial Number: 3c03ef012c77f0893094 00:22:46.985 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:46.985 Firmware Version: 6.7.0-68 00:22:46.985 Recommended Arb Burst: 6 00:22:46.985 IEEE OUI Identifier: 00 00 00 00:22:46.985 Multi-path I/O 00:22:46.985 May have multiple subsystem ports: Yes 00:22:46.985 May have multiple controllers: Yes 00:22:46.985 Associated with SR-IOV VF: No 00:22:46.985 Max Data Transfer Size: Unlimited 00:22:46.985 Max Number of Namespaces: 1024 00:22:46.985 Max Number of I/O Queues: 128 00:22:46.985 NVMe Specification Version (VS): 1.3 00:22:46.985 NVMe Specification Version (Identify): 1.3 00:22:46.985 Maximum Queue Entries: 1024 00:22:46.985 Contiguous Queues Required: No 00:22:46.985 Arbitration Mechanisms Supported 00:22:46.985 Weighted Round Robin: Not Supported 00:22:46.985 Vendor Specific: Not Supported 00:22:46.985 Reset Timeout: 7500 ms 00:22:46.985 Doorbell Stride: 4 bytes 00:22:46.985 NVM Subsystem Reset: Not Supported 00:22:46.985 Command Sets Supported 00:22:46.985 NVM Command Set: Supported 00:22:46.985 Boot Partition: Not Supported 00:22:46.985 Memory Page Size Minimum: 4096 bytes 00:22:46.985 Memory Page Size Maximum: 4096 bytes 00:22:46.985 Persistent Memory Region: Not Supported 00:22:46.985 Optional Asynchronous Events Supported 00:22:46.985 Namespace Attribute Notices: Supported 00:22:46.985 Firmware Activation Notices: Not Supported 00:22:46.985 ANA Change Notices: Supported 00:22:46.985 PLE Aggregate Log Change Notices: Not Supported 00:22:46.985 LBA Status Info Alert Notices: Not Supported 00:22:46.985 EGE Aggregate Log Change Notices: Not Supported 00:22:46.985 Normal NVM Subsystem Shutdown event: Not Supported 00:22:46.985 Zone Descriptor Change Notices: Not Supported 00:22:46.985 Discovery Log Change Notices: Not Supported 00:22:46.985 Controller Attributes 00:22:46.985 128-bit Host Identifier: Supported 00:22:46.985 Non-Operational Permissive Mode: Not Supported 00:22:46.985 NVM Sets: Not Supported 00:22:46.985 Read Recovery Levels: Not Supported 00:22:46.985 Endurance Groups: Not Supported 00:22:46.985 Predictable Latency Mode: Not Supported 00:22:46.985 Traffic Based Keep ALive: Supported 00:22:46.985 Namespace Granularity: Not Supported 
00:22:46.985 SQ Associations: Not Supported 00:22:46.985 UUID List: Not Supported 00:22:46.985 Multi-Domain Subsystem: Not Supported 00:22:46.985 Fixed Capacity Management: Not Supported 00:22:46.985 Variable Capacity Management: Not Supported 00:22:46.985 Delete Endurance Group: Not Supported 00:22:46.985 Delete NVM Set: Not Supported 00:22:46.985 Extended LBA Formats Supported: Not Supported 00:22:46.985 Flexible Data Placement Supported: Not Supported 00:22:46.985 00:22:46.985 Controller Memory Buffer Support 00:22:46.985 ================================ 00:22:46.985 Supported: No 00:22:46.985 00:22:46.985 Persistent Memory Region Support 00:22:46.985 ================================ 00:22:46.985 Supported: No 00:22:46.985 00:22:46.985 Admin Command Set Attributes 00:22:46.985 ============================ 00:22:46.985 Security Send/Receive: Not Supported 00:22:46.985 Format NVM: Not Supported 00:22:46.985 Firmware Activate/Download: Not Supported 00:22:46.985 Namespace Management: Not Supported 00:22:46.985 Device Self-Test: Not Supported 00:22:46.985 Directives: Not Supported 00:22:46.985 NVMe-MI: Not Supported 00:22:46.985 Virtualization Management: Not Supported 00:22:46.985 Doorbell Buffer Config: Not Supported 00:22:46.985 Get LBA Status Capability: Not Supported 00:22:46.985 Command & Feature Lockdown Capability: Not Supported 00:22:46.985 Abort Command Limit: 4 00:22:46.985 Async Event Request Limit: 4 00:22:46.985 Number of Firmware Slots: N/A 00:22:46.985 Firmware Slot 1 Read-Only: N/A 00:22:46.985 Firmware Activation Without Reset: N/A 00:22:46.985 Multiple Update Detection Support: N/A 00:22:46.985 Firmware Update Granularity: No Information Provided 00:22:46.985 Per-Namespace SMART Log: Yes 00:22:46.986 Asymmetric Namespace Access Log Page: Supported 00:22:46.986 ANA Transition Time : 10 sec 00:22:46.986 00:22:46.986 Asymmetric Namespace Access Capabilities 00:22:46.986 ANA Optimized State : Supported 00:22:46.986 ANA Non-Optimized State : Supported 00:22:46.986 ANA Inaccessible State : Supported 00:22:46.986 ANA Persistent Loss State : Supported 00:22:46.986 ANA Change State : Supported 00:22:46.986 ANAGRPID is not changed : No 00:22:46.986 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:46.986 00:22:46.986 ANA Group Identifier Maximum : 128 00:22:46.986 Number of ANA Group Identifiers : 128 00:22:46.986 Max Number of Allowed Namespaces : 1024 00:22:46.986 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:46.986 Command Effects Log Page: Supported 00:22:46.986 Get Log Page Extended Data: Supported 00:22:46.986 Telemetry Log Pages: Not Supported 00:22:46.986 Persistent Event Log Pages: Not Supported 00:22:46.986 Supported Log Pages Log Page: May Support 00:22:46.986 Commands Supported & Effects Log Page: Not Supported 00:22:46.986 Feature Identifiers & Effects Log Page:May Support 00:22:46.986 NVMe-MI Commands & Effects Log Page: May Support 00:22:46.986 Data Area 4 for Telemetry Log: Not Supported 00:22:46.986 Error Log Page Entries Supported: 128 00:22:46.986 Keep Alive: Supported 00:22:46.986 Keep Alive Granularity: 1000 ms 00:22:46.986 00:22:46.986 NVM Command Set Attributes 00:22:46.986 ========================== 00:22:46.986 Submission Queue Entry Size 00:22:46.986 Max: 64 00:22:46.986 Min: 64 00:22:46.986 Completion Queue Entry Size 00:22:46.986 Max: 16 00:22:46.986 Min: 16 00:22:46.986 Number of Namespaces: 1024 00:22:46.986 Compare Command: Not Supported 00:22:46.986 Write Uncorrectable Command: Not Supported 00:22:46.986 Dataset Management Command: Supported 
00:22:46.986 Write Zeroes Command: Supported 00:22:46.986 Set Features Save Field: Not Supported 00:22:46.986 Reservations: Not Supported 00:22:46.986 Timestamp: Not Supported 00:22:46.986 Copy: Not Supported 00:22:46.986 Volatile Write Cache: Present 00:22:46.986 Atomic Write Unit (Normal): 1 00:22:46.986 Atomic Write Unit (PFail): 1 00:22:46.986 Atomic Compare & Write Unit: 1 00:22:46.986 Fused Compare & Write: Not Supported 00:22:46.986 Scatter-Gather List 00:22:46.986 SGL Command Set: Supported 00:22:46.986 SGL Keyed: Not Supported 00:22:46.986 SGL Bit Bucket Descriptor: Not Supported 00:22:46.986 SGL Metadata Pointer: Not Supported 00:22:46.986 Oversized SGL: Not Supported 00:22:46.986 SGL Metadata Address: Not Supported 00:22:46.986 SGL Offset: Supported 00:22:46.986 Transport SGL Data Block: Not Supported 00:22:46.986 Replay Protected Memory Block: Not Supported 00:22:46.986 00:22:46.986 Firmware Slot Information 00:22:46.986 ========================= 00:22:46.986 Active slot: 0 00:22:46.986 00:22:46.986 Asymmetric Namespace Access 00:22:46.986 =========================== 00:22:46.986 Change Count : 0 00:22:46.986 Number of ANA Group Descriptors : 1 00:22:46.986 ANA Group Descriptor : 0 00:22:46.986 ANA Group ID : 1 00:22:46.986 Number of NSID Values : 1 00:22:46.986 Change Count : 0 00:22:46.986 ANA State : 1 00:22:46.986 Namespace Identifier : 1 00:22:46.986 00:22:46.986 Commands Supported and Effects 00:22:46.986 ============================== 00:22:46.986 Admin Commands 00:22:46.986 -------------- 00:22:46.986 Get Log Page (02h): Supported 00:22:46.986 Identify (06h): Supported 00:22:46.986 Abort (08h): Supported 00:22:46.986 Set Features (09h): Supported 00:22:46.986 Get Features (0Ah): Supported 00:22:46.986 Asynchronous Event Request (0Ch): Supported 00:22:46.986 Keep Alive (18h): Supported 00:22:46.986 I/O Commands 00:22:46.986 ------------ 00:22:46.986 Flush (00h): Supported 00:22:46.986 Write (01h): Supported LBA-Change 00:22:46.986 Read (02h): Supported 00:22:46.986 Write Zeroes (08h): Supported LBA-Change 00:22:46.986 Dataset Management (09h): Supported 00:22:46.986 00:22:46.986 Error Log 00:22:46.986 ========= 00:22:46.986 Entry: 0 00:22:46.986 Error Count: 0x3 00:22:46.986 Submission Queue Id: 0x0 00:22:46.986 Command Id: 0x5 00:22:46.986 Phase Bit: 0 00:22:46.986 Status Code: 0x2 00:22:46.986 Status Code Type: 0x0 00:22:46.986 Do Not Retry: 1 00:22:46.986 Error Location: 0x28 00:22:46.986 LBA: 0x0 00:22:46.986 Namespace: 0x0 00:22:46.986 Vendor Log Page: 0x0 00:22:46.986 ----------- 00:22:46.986 Entry: 1 00:22:46.986 Error Count: 0x2 00:22:46.986 Submission Queue Id: 0x0 00:22:46.986 Command Id: 0x5 00:22:46.986 Phase Bit: 0 00:22:46.986 Status Code: 0x2 00:22:46.986 Status Code Type: 0x0 00:22:46.986 Do Not Retry: 1 00:22:46.986 Error Location: 0x28 00:22:46.986 LBA: 0x0 00:22:46.986 Namespace: 0x0 00:22:46.986 Vendor Log Page: 0x0 00:22:46.986 ----------- 00:22:46.986 Entry: 2 00:22:46.986 Error Count: 0x1 00:22:46.986 Submission Queue Id: 0x0 00:22:46.986 Command Id: 0x4 00:22:46.986 Phase Bit: 0 00:22:46.986 Status Code: 0x2 00:22:46.986 Status Code Type: 0x0 00:22:46.986 Do Not Retry: 1 00:22:46.986 Error Location: 0x28 00:22:46.986 LBA: 0x0 00:22:46.986 Namespace: 0x0 00:22:46.986 Vendor Log Page: 0x0 00:22:46.986 00:22:46.986 Number of Queues 00:22:46.986 ================ 00:22:46.986 Number of I/O Submission Queues: 128 00:22:46.986 Number of I/O Completion Queues: 128 00:22:46.986 00:22:46.986 ZNS Specific Controller Data 00:22:46.986 
============================ 00:22:46.986 Zone Append Size Limit: 0 00:22:46.986 00:22:46.986 00:22:46.986 Active Namespaces 00:22:46.986 ================= 00:22:46.986 get_feature(0x05) failed 00:22:46.986 Namespace ID:1 00:22:46.986 Command Set Identifier: NVM (00h) 00:22:46.986 Deallocate: Supported 00:22:46.986 Deallocated/Unwritten Error: Not Supported 00:22:46.986 Deallocated Read Value: Unknown 00:22:46.986 Deallocate in Write Zeroes: Not Supported 00:22:46.986 Deallocated Guard Field: 0xFFFF 00:22:46.986 Flush: Supported 00:22:46.986 Reservation: Not Supported 00:22:46.986 Namespace Sharing Capabilities: Multiple Controllers 00:22:46.986 Size (in LBAs): 1953525168 (931GiB) 00:22:46.986 Capacity (in LBAs): 1953525168 (931GiB) 00:22:46.986 Utilization (in LBAs): 1953525168 (931GiB) 00:22:46.986 UUID: ddc848eb-2faf-412e-9b6f-08dc9f666451 00:22:46.986 Thin Provisioning: Not Supported 00:22:46.986 Per-NS Atomic Units: Yes 00:22:46.986 Atomic Boundary Size (Normal): 0 00:22:46.986 Atomic Boundary Size (PFail): 0 00:22:46.986 Atomic Boundary Offset: 0 00:22:46.986 NGUID/EUI64 Never Reused: No 00:22:46.986 ANA group ID: 1 00:22:46.987 Namespace Write Protected: No 00:22:46.987 Number of LBA Formats: 1 00:22:46.987 Current LBA Format: LBA Format #00 00:22:46.987 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:46.987 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.987 rmmod nvme_tcp 00:22:46.987 rmmod nvme_fabrics 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.987 16:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.523 
16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:49.523 16:15:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:49.523 16:15:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:50.459 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:50.459 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:50.459 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:51.446 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:22:51.705 00:22:51.705 real 0m9.833s 00:22:51.705 user 0m2.126s 00:22:51.705 sys 0m3.475s 00:22:51.705 16:15:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.705 16:15:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.705 ************************************ 00:22:51.705 END TEST nvmf_identify_kernel_target 00:22:51.705 ************************************ 00:22:51.705 16:15:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:51.705 16:15:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:51.705 16:15:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:51.705 16:15:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.705 16:15:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.705 ************************************ 00:22:51.705 START TEST nvmf_auth_host 00:22:51.705 ************************************ 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:51.705 * Looking for test storage... 00:22:51.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.705 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.706 16:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.240 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.240 
16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:54.241 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:54.241 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:54.241 Found net devices under 0000:09:00.0: 
cvl_0_0 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:54.241 Found net devices under 0000:09:00.1: cvl_0_1 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:22:54.241 00:22:54.241 --- 10.0.0.2 ping statistics --- 00:22:54.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.241 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:22:54.241 00:22:54.241 --- 10.0.0.1 ping statistics --- 00:22:54.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.241 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=869356 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 869356 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 869356 ']' 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
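The two pings above are the sanity check on the topology nvmf_tcp_init assembled: one E810 port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace where the SPDK application is started, its sibling (cvl_0_1, 10.0.0.1/24) stays in the default namespace, and an iptables rule admits NVMe/TCP on port 4420. Condensed from the trace, with all names and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # namespace-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # default-namespace address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP reach port 4420 in the default namespace
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1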
00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.241 16:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c6658e8ec6266eced66c29af44c0f6f 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.err 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c6658e8ec6266eced66c29af44c0f6f 0 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c6658e8ec6266eced66c29af44c0f6f 0 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.241 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c6658e8ec6266eced66c29af44c0f6f 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.err 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.err 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.err 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:54.242 
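Each gen_dhchap_key call in this stretch reads random bytes with xxd -p from /dev/urandom and formats them, via the small python helper, into a DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64(key || crc32)>:, stored mode 0600 under /tmp. For reference, a roughly equivalent secret can be produced with nvme-cli, assuming its gen-dhchap-key command is available; these flags belong to nvme-cli, not to the script:

  nvme gen-dhchap-key --key-length=32 --hmac=0   # 32-byte cleartext secret, like the spdk.key-null.* files
  nvme gen-dhchap-key --key-length=64 --hmac=3   # 64-byte sha512 secret, like the spdk.key-sha512.* files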
16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e633b6a3ad525580a3980b0ec68ca12d7ac8bcb5c806fc87e93226569cb65604 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Acp 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e633b6a3ad525580a3980b0ec68ca12d7ac8bcb5c806fc87e93226569cb65604 3 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e633b6a3ad525580a3980b0ec68ca12d7ac8bcb5c806fc87e93226569cb65604 3 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e633b6a3ad525580a3980b0ec68ca12d7ac8bcb5c806fc87e93226569cb65604 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:54.242 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Acp 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Acp 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Acp 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ece3d8463524276e5ce6191cea721182e76dfa2466957a5c 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AVi 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ece3d8463524276e5ce6191cea721182e76dfa2466957a5c 0 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ece3d8463524276e5ce6191cea721182e76dfa2466957a5c 0 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.500 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ece3d8463524276e5ce6191cea721182e76dfa2466957a5c 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AVi 00:22:54.501 16:15:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AVi 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AVi 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7436033bc8e1b47f68b48fa0dff35badd3639f3dab948b6e 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zR0 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7436033bc8e1b47f68b48fa0dff35badd3639f3dab948b6e 2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7436033bc8e1b47f68b48fa0dff35badd3639f3dab948b6e 2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7436033bc8e1b47f68b48fa0dff35badd3639f3dab948b6e 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zR0 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zR0 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zR0 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc2765ca7fed0a33ccdb5205da8bddf7 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O9B 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc2765ca7fed0a33ccdb5205da8bddf7 1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc2765ca7fed0a33ccdb5205da8bddf7 1 
00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc2765ca7fed0a33ccdb5205da8bddf7 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O9B 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O9B 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.O9B 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99f293599c40e9875905d35a5c0e0629 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T2x 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99f293599c40e9875905d35a5c0e0629 1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99f293599c40e9875905d35a5c0e0629 1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99f293599c40e9875905d35a5c0e0629 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T2x 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T2x 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T2x 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e479c82e9152d9d2df910ee557c0944e49905081f5356781 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ro6 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e479c82e9152d9d2df910ee557c0944e49905081f5356781 2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e479c82e9152d9d2df910ee557c0944e49905081f5356781 2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e479c82e9152d9d2df910ee557c0944e49905081f5356781 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:54.501 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ro6 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ro6 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ro6 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fb598c1ca2237a9a58b394045865e3f8 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tBl 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fb598c1ca2237a9a58b394045865e3f8 0 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fb598c1ca2237a9a58b394045865e3f8 0 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fb598c1ca2237a9a58b394045865e3f8 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tBl 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tBl 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.tBl 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a783406bcada152a685293b62e3b19b63951c8d20d15c8c2cfdf76efa7050f9d 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ftF 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a783406bcada152a685293b62e3b19b63951c8d20d15c8c2cfdf76efa7050f9d 3 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a783406bcada152a685293b62e3b19b63951c8d20d15c8c2cfdf76efa7050f9d 3 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a783406bcada152a685293b62e3b19b63951c8d20d15c8c2cfdf76efa7050f9d 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ftF 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ftF 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ftF 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 869356 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 869356 ']' 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.759 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
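The five key/ckey pairs generated above exist only as files under /tmp at this point; waitforlisten blocks until the nvmf_tgt RPC socket is up, and the keyring_file_add_key calls that follow register each file under a short name (key0..key4, ckey0..ckey3) so later attach calls can reference them. Condensed, assuming rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock as elsewhere in this harness:

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.err
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Acp
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.AVi
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zR0
  # key2/ckey2, key3/ckey3 and key4 follow the same pattern; ckey4 is empty and skipped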
00:22:54.760 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.760 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.err 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Acp ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Acp 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AVi 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zR0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zR0 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.O9B 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T2x ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T2x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ro6 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.tBl ]] 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.tBl 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ftF 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:55.018 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:55.019 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:55.019 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
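configure_kernel_target, which begins here, stands up the in-kernel NVMe-oF target this test authenticates against: load nvmet, create a subsystem, namespace, and TCP port under configfs, back the namespace with the local NVMe disk, and link the port to the subsystem; the setup.sh reset and nvme rebind traced next are what make /dev/nvme0n1 reappear for that purpose. A condensed sketch using the values visible in this run; the attribute file names are the standard nvmet configfs names, which the trace does not show because redirections are not traced:

  modprobe nvmet                    # nvmet_tcp ends up loaded too; the earlier teardown removed both
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  mkdir ports/1
  echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
  # nvmet_auth_init later resets allow_any_host to 0 and links nqn.2024-02.io.spdk:host0 into allowed_hosts/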
00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:55.276 16:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:56.209 Waiting for block devices as requested 00:22:56.466 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:56.466 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:56.466 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:56.724 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:56.724 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:56.724 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:56.982 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:56.982 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:56.982 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:57.239 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:57.239 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:57.239 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:57.239 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:57.496 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:57.496 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:57.496 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:57.496 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:58.062 No valid GPT data, bailing 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:58.062 00:22:58.062 Discovery Log Number of Records 2, Generation counter 2 00:22:58.062 =====Discovery Log Entry 0====== 00:22:58.062 trtype: tcp 00:22:58.062 adrfam: ipv4 00:22:58.062 subtype: current discovery subsystem 00:22:58.062 treq: not specified, sq flow control disable supported 00:22:58.062 portid: 1 00:22:58.062 trsvcid: 4420 00:22:58.062 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:58.062 traddr: 10.0.0.1 00:22:58.062 eflags: none 00:22:58.062 sectype: none 00:22:58.062 =====Discovery Log Entry 1====== 00:22:58.062 trtype: tcp 00:22:58.062 adrfam: ipv4 00:22:58.062 subtype: nvme subsystem 00:22:58.062 treq: not specified, sq flow control disable supported 00:22:58.062 portid: 1 00:22:58.062 trsvcid: 4420 00:22:58.062 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:58.062 traddr: 10.0.0.1 00:22:58.062 eflags: none 00:22:58.062 sectype: none 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 
]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.062 16:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.321 nvme0n1 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.321 
16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.321 
16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.321 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 nvme0n1 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.579 16:15:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:58.579 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.580 nvme0n1 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.580 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
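The iterations above show the per-key pattern for the sha256/ffdhe2048 pass: for every keyid, nvmet_auth_set_key loads the digest, DH group, host secret and, when one exists, the controller secret into the kernel target, and connect_authenticate then reconfigures the SPDK initiator and re-attaches. The echoes at host/auth.sh@48-51 are redirected into the nvmet configfs entry created for nqn.2024-02.io.spdk:host0 earlier in the trace; a minimal reconstruction is sketched below, assuming the standard nvmet host attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which the xtrace output itself does not show, and reusing the script's keys/ckeys arrays.

  # Sketch of nvmet_auth_set_key <digest> <dhgroup> <keyid>; the configfs attribute names are assumed.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)"  > "$host_dir/dhchap_hash"     # e.g. 'hmac(sha256)'
      echo "$dhgroup"       > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe2048
      echo "${keys[keyid]}" > "$host_dir/dhchap_key"      # DHHC-1:... host secret for this keyid
      # The controller (bidirectional) secret is optional and only written when the script defines one.
      [[ -n ${ckeys[keyid]:-} ]] && echo "${ckeys[keyid]}" > "$host_dir/dhchap_ctrl_key"
  }

On the initiator side the matching calls are visible verbatim in the trace: bdev_nvme_set_options restricts --dhchap-digests/--dhchap-dhgroups to the combination under test, and bdev_nvme_attach_controller dials 10.0.0.1:4420 with the corresponding --dhchap-key/--dhchap-ctrlr-key names; each interleaved nvme0n1 line is the bdev name reported back once an attach succeeds.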
00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.838 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.839 nvme0n1 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.839 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:22:59.097 16:15:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 16:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 nvme0n1 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:59.097 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.098 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.360 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.360 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.361 nvme0n1 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.361 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.620 nvme0n1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.620 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 nvme0n1 00:22:59.879 
16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.879 16:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 nvme0n1 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
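Each ffdhe3072 iteration above repeats the same initiator-side sequence, only the key pair changes. One pass, condensed from the trace, would look roughly like the sketch below; rpc_cmd is taken to be the test harness's wrapper around scripts/rpc.py (the exact client path is an assumption), and key2/ckey2 are key names prepared earlier in host/auth.sh, outside this excerpt.

  # One connect_authenticate pass against the in-kernel target (sketch; rpc client path assumed).
  rpc_py=scripts/rpc.py
  $rpc_py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # A successful DH-HMAC-CHAP handshake leaves exactly one controller behind.
  [[ $($rpc_py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc_py bdev_nvme_detach_controller nvme0   # tear down before the next keyid/dhgroup

The get_main_ns_ip blocks interleaved with every attach only resolve which address to dial: ip_candidates maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, so for this tcp run the helper always echoes 10.0.0.1.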
00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.137 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.395 nvme0n1 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.395 
16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:00.395 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.396 16:15:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.396 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.654 nvme0n1 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:00.654 16:15:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.654 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.911 nvme0n1 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.911 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.169 16:15:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.169 16:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.427 nvme0n1 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.427 16:15:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.427 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.685 nvme0n1 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
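
Before each connect, the nvmet_auth_set_key entries (host/auth.sh@42-51) load the target side with the hash, DH group, host key and optional controller key; the trace only shows the values being echoed, not their destination. A sketch of that shape, where the configfs path and the hostnqn variable are assumptions for illustration:

# Sketch of the nvmet_auth_set_key pattern in the trace (host/auth.sh@42-51).
# NOTE: the configfs destination and $hostnqn are assumptions for illustration;
# the trace itself only shows the echoed values.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path

    echo "hmac($digest)" > "$host_dir/dhchap_hash"       # 'hmac(sha256)' in this pass
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"    # ffdhe4096 in this round
    echo "$key"          > "$host_dir/dhchap_key"        # DHHC-1:xx:<base64>: host secret
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only when a ctrlr key exists
}
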
00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.685 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.944 nvme0n1 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.944 16:15:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.944 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.945 16:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.202 nvme0n1 00:23:02.202 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.202 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.202 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.202 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.202 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:02.459 16:15:48 
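
Each round then repeats the same host-side sequence through the SPDK RPC interface: restrict the allowed digest and DH group, attach nvme0 with the matching --dhchap-key (plus --dhchap-ctrlr-key when the test is bidirectional), confirm the controller actually came up, and detach. A condensed sketch of that sequence as it appears in the trace, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running target and that the NQNs and port are the ones printed in the log:

# Condensed connect_authenticate, following the RPC calls visible in the trace
# (host/auth.sh@55-65). rpc_cmd is assumed to wrap SPDK's scripts/rpc.py.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Only allow the digest/DH group under test, then attach with the matching key.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only completes if DH-HMAC-CHAP succeeded; verify, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
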
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.459 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.023 nvme0n1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.023 
16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.023 16:15:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.023 16:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.588 nvme0n1 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.588 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.589 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.154 nvme0n1 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.154 
16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.154 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.155 16:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.411 nvme0n1 00:23:04.411 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.411 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.411 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.411 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.411 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.668 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.669 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.233 nvme0n1 00:23:05.233 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.233 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.234 16:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.234 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.234 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.234 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.173 nvme0n1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.173 16:15:51 
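
Every secret in this run uses the DH-HMAC-CHAP representation DHHC-1:xx:<base64>:, where the xx field records the hash used to transform the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512, per the NVMe-oF secret representation). Secrets of this shape are normally produced with nvme-cli's gen-dhchap-key; the invocation below is illustrative only and its flags should be checked against the installed nvme-cli version:

# Illustrative only: generate a host secret of the same shape as the keys above.
# --hmac selects the transform recorded in the "xx" field of DHHC-1:xx:...:
# (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512); verify flags on your nvme-cli.
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0
# -> DHHC-1:01:<base64-encoded secret + CRC>:
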
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.173 16:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.118 nvme0n1 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.118 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.119 16:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.065 nvme0n1 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.065 
16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
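For readability, here is the host-side sequence the log repeats for every key, reconstructed as a minimal shell sketch from the xtrace above. rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py; the NQNs, address and port are the values visible in this log, and anything not shown in the trace (such as how a missing controller key is detected) is an assumption.

# Sketch of one connect_authenticate pass; digest, dhgroup and keyid vary per iteration.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Only pass a controller key when one exists for this keyid (keyid 4 has none),
    # mirroring the ${ckeys[keyid]:+...} expansion traced at host/auth.sh@58.
    local ctrlr_key=()
    [[ -n ${ckeys[keyid]:-} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

    # Attach with the numbered DH-HMAC-CHAP key over TCP to the target shown in the log.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # Authentication succeeded if the controller shows up; then tear it down again.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}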
00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.065 16:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.630 nvme0n1 00:23:08.630 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.630 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.630 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.630 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.631 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.631 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.888 
16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.888 16:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 nvme0n1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 nvme0n1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
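The host/auth.sh@100-103 markers above show the enclosing loops: every digest is paired with every DH group and every key index, and each combination first programs the target side (nvmet_auth_set_key) and then runs the host-side attach. A sketch of that structure follows; the exact array definitions are not visible in this excerpt, so the digest and DH group lists below are inferred from the values that appear in the log and should be treated as assumptions.

# Loop structure inferred from the host/auth.sh@100-104 trace markers.
digests=(sha256 sha384)                                       # sha256 and sha384 appear in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed full ffdhe list

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                        # key ids 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (see sketch above)
        done
    done
done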
00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.821 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.079 nvme0n1 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.079 16:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:10.079 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.080 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.338 nvme0n1 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.338 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.596 nvme0n1 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.596 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 nvme0n1 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.112 nvme0n1 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.112 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
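The nvmf/common.sh@741-755 lines that precede every attach are the get_main_ns_ip helper deciding which address to dial for the configured transport. Reconstructed from the xtrace (the trace only shows expanded values such as "tcp" and 10.0.0.1, so the TEST_TRANSPORT and NVMF_INITIATOR_IP variable names used here are assumptions):

# Reconstruction of get_main_ns_ip as traced at nvmf/common.sh@741-755.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable holding the address
    ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
    [[ -z $ip ]] && return 1
    echo "$ip"
}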
00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.113 16:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.371 nvme0n1 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.371 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.629 nvme0n1 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.629 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.630 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.888 nvme0n1 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.888 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.148 nvme0n1 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.148 16:15:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.148 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.149 16:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.149 16:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.149 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.149 16:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.407 nvme0n1 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:12.407 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.408 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.665 nvme0n1 00:23:12.665 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.665 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.665 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.666 16:15:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.666 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.923 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.924 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 nvme0n1 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.181 16:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:13.181 16:15:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.181 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.438 nvme0n1 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.438 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:13.439 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.696 nvme0n1 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.696 16:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.262 nvme0n1 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.262 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.520 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.521 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.779 nvme0n1 00:23:14.779 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.779 16:16:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.779 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.779 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.779 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.779 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.036 16:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.602 nvme0n1 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.602 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.859 nvme0n1 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.859 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
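For reference, the host-side half of each connect_authenticate pass in this trace boils down to a handful of RPC calls. The following is a minimal sketch, not a verbatim excerpt: it assumes the same target address and subsystem NQNs shown above, and that the key names (key0-key4, plus controller keys ckey0-ckey3) were registered with the host keyring earlier in the run; rpc_cmd is the test suite's thin wrapper around SPDK's JSON-RPC client.

  # Limit the host to the digest/DH-group pair under test so the negotiated
  # parameters are unambiguous.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is added only for key ids
  # that carry a controller secret (key id 4 has none in this run).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

  # Verify the authenticated controller came up, then detach before the next pass.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0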
00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.117 16:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.683 nvme0n1 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
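The repetition in the log comes from a small nested loop (the host/auth.sh@101-@104 frames above): every DH group is exercised against every key id, with the target side reprogrammed before each host connection. A sketch of that loop as it appears in this stretch, where only the sha384 digest is being exercised; the dhgroups and keys arrays are assumed to hold the values visible in the trace.

  for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 here
    for keyid in "${!keys[@]}"; do             # key ids 0-4
      nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # program digest, DH group and key on the target
      connect_authenticate sha384 "$dhgroup" "$keyid"   # set host options, attach, verify, detach
    done
  done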
00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.683 16:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.618 nvme0n1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.618 16:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 nvme0n1 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.553 16:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.119 nvme0n1 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.119 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.377 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.378 16:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 nvme0n1 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.311 16:16:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.311 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.881 nvme0n1 00:23:20.881 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.171 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.172 16:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.172 nvme0n1 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.172 16:16:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.172 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 nvme0n1 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.432 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.433 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.433 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.433 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.433 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.691 nvme0n1 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.691 16:16:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.691 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.692 16:16:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.692 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.950 nvme0n1 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.950 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.951 16:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.208 nvme0n1 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.208 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 nvme0n1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 
16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 16:16:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.728 nvme0n1 00:23:22.728 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.728 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.728 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.728 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.729 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.988 nvme0n1 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.988 16:16:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.988 16:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.246 nvme0n1 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.246 
16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.246 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 nvme0n1 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.504 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 nvme0n1 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.762 16:16:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.762 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.763 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.020 nvme0n1 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.020 16:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.020 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.020 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.020 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:24.020 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.020 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
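[Editor's note] The initiator-side counterpart, connect_authenticate <digest> <dhgroup> <keyid>, is what drives the bdev_nvme_set_options and bdev_nvme_attach_controller RPCs seen above. A condensed sketch of that flow, keeping only the calls visible in this trace, is below; the keyN/ckeyN names passed to --dhchap-key/--dhchap-ctrlr-key are defined earlier in auth.sh and are not shown in this excerpt.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key argument is added only when a ckey exists for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict negotiation to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over TCP to the in-kernel target configured above.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }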
00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.021 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.280 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.280 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.280 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.538 nvme0n1 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.538 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.539 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.799 nvme0n1 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.799 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.060 nvme0n1 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.060 16:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
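[Editor's note] The short nvmf/common.sh block that precedes each attach (local ip, ip_candidates, the -z checks, echo 10.0.0.1) is get_main_ns_ip resolving which address to connect to: NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP. A sketch of that logic as it appears here; the transport selector variable is not visible in the trace (only its value, tcp), so TEST_TRANSPORT below is an assumption.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP)

        # Fail if no candidate is defined for the active transport (assumed name).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1 in this run
        echo "${!ip}"
    }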
00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.060 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.625 nvme0n1 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
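[Editor's note] Between combinations the trace repeats the same verification and teardown steps: list the attached controllers over RPC, confirm the expected controller name came back, then detach before moving to the next digest/dhgroup/key combination. Condensed, using only what is visible above:

    # One verification/teardown round from the trace.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                    # the xtrace spells the literal as \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0 # drop the controller before the next round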
00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.625 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.626 16:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.194 nvme0n1 00:23:26.194 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.194 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.194 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.194 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.194 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.195 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 nvme0n1 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.764 16:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.331 nvme0n1 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.897 nvme0n1 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.897 16:16:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM2NjU4ZThlYzYyNjZlY2VkNjZjMjlhZjQ0YzBmNmYpS1A3: 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYzM2I2YTNhZDUyNTU4MGEzOTgwYjBlYzY4Y2ExMmQ3YWM4YmNiNWM4MDZmYzg3ZTkzMjI2NTY5Y2I2NTYwNO2yIG4=: 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.897 16:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.835 nvme0n1 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:28.835 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.836 16:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.773 nvme0n1 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.773 16:16:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGMyNzY1Y2E3ZmVkMGEzM2NjZGI1MjA1ZGE4YmRkZjconO14: 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTlmMjkzNTk5YzQwZTk4NzU5MDVkMzVhNWMwZTA2MjnkNzGs: 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.773 16:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.714 nvme0n1 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ3OWM4MmU5MTUyZDlkMmRmOTEwZWU1NTdjMDk0NGU0OTkwNTA4MWY1MzU2NzgxAVEXIA==: 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmI1OThjMWNhMjIzN2E5YTU4YjM5NDA0NTg2NWUzZjiY/dj0: 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:30.714 16:16:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.714 16:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.652 nvme0n1 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTc4MzQwNmJjYWRhMTUyYTY4NTI5M2I2MmUzYjE5YjYzOTUxYzhkMjBkMTVjOGMyY2ZkZjc2ZWZhNzA1MGY5ZKX/Bx4=: 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:31.652 16:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 nvme0n1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlM2Q4NDYzNTI0Mjc2ZTVjZTYxOTFjZWE3MjExODJlNzZkZmEyNDY2OTU3YTVjn8xUtw==: 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQzNjAzM2JjOGUxYjQ3ZjY4YjQ4ZmEwZGZmMzViYWRkMzYzOWYzZGFiOTQ4YjZlqYxB6g==: 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.593 
16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 request: 00:23:32.593 { 00:23:32.593 "name": "nvme0", 00:23:32.593 "trtype": "tcp", 00:23:32.593 "traddr": "10.0.0.1", 00:23:32.593 "adrfam": "ipv4", 00:23:32.593 "trsvcid": "4420", 00:23:32.593 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:32.593 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:32.593 "prchk_reftag": false, 00:23:32.593 "prchk_guard": false, 00:23:32.593 "hdgst": false, 00:23:32.593 "ddgst": false, 00:23:32.593 "method": "bdev_nvme_attach_controller", 00:23:32.593 "req_id": 1 00:23:32.593 } 00:23:32.593 Got JSON-RPC error response 00:23:32.593 response: 00:23:32.593 { 00:23:32.593 "code": -5, 00:23:32.593 "message": "Input/output error" 00:23:32.593 } 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.593 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.593 request: 00:23:32.593 { 00:23:32.593 "name": "nvme0", 00:23:32.594 "trtype": "tcp", 00:23:32.594 "traddr": "10.0.0.1", 00:23:32.594 "adrfam": "ipv4", 00:23:32.594 "trsvcid": "4420", 00:23:32.594 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:32.594 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:32.594 "prchk_reftag": false, 00:23:32.594 "prchk_guard": false, 00:23:32.594 "hdgst": false, 00:23:32.594 "ddgst": false, 00:23:32.594 "dhchap_key": "key2", 00:23:32.594 "method": "bdev_nvme_attach_controller", 00:23:32.594 "req_id": 1 00:23:32.594 } 00:23:32.594 Got JSON-RPC error response 00:23:32.594 response: 00:23:32.594 { 00:23:32.594 "code": -5, 00:23:32.594 "message": "Input/output error" 00:23:32.594 } 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:32.594 16:16:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.594 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.855 request: 00:23:32.855 { 00:23:32.855 "name": "nvme0", 00:23:32.855 "trtype": "tcp", 00:23:32.855 "traddr": "10.0.0.1", 00:23:32.855 "adrfam": "ipv4", 
00:23:32.855 "trsvcid": "4420", 00:23:32.855 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:32.855 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:32.855 "prchk_reftag": false, 00:23:32.855 "prchk_guard": false, 00:23:32.855 "hdgst": false, 00:23:32.855 "ddgst": false, 00:23:32.855 "dhchap_key": "key1", 00:23:32.855 "dhchap_ctrlr_key": "ckey2", 00:23:32.855 "method": "bdev_nvme_attach_controller", 00:23:32.855 "req_id": 1 00:23:32.855 } 00:23:32.855 Got JSON-RPC error response 00:23:32.855 response: 00:23:32.855 { 00:23:32.855 "code": -5, 00:23:32.855 "message": "Input/output error" 00:23:32.855 } 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.855 rmmod nvme_tcp 00:23:32.855 rmmod nvme_fabrics 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 869356 ']' 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 869356 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 869356 ']' 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 869356 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 869356 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 869356' 00:23:32.855 killing process with pid 869356 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 869356 00:23:32.855 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 869356 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.115 16:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:35.020 16:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:35.277 16:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:36.661 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:36.661 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:36.661 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:37.627 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:37.627 16:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.err /tmp/spdk.key-null.AVi /tmp/spdk.key-sha256.O9B /tmp/spdk.key-sha384.ro6 /tmp/spdk.key-sha512.ftF 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:37.627 16:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:39.002 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:39.002 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:39.002 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:39.002 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:39.002 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:39.002 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:39.002 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:39.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:39.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:39.002 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:39.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:39.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:39.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:39.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:39.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:39.002 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:39.002 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:39.002 00:23:39.002 real 0m47.336s 00:23:39.002 user 0m44.799s 00:23:39.002 sys 0m5.916s 00:23:39.002 16:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.002 16:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.002 ************************************ 00:23:39.002 END TEST nvmf_auth_host 00:23:39.002 ************************************ 00:23:39.002 16:16:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.002 16:16:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:39.003 16:16:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:39.003 16:16:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.003 16:16:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.003 16:16:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.003 ************************************ 00:23:39.003 START TEST nvmf_digest 00:23:39.003 ************************************ 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:39.003 * Looking for test storage... 
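Before the digest suite starts, the auth test tears down the kernel nvmet target it configured. The order matters: the configfs directories can only be removed once the symlinks and child entries beneath them are gone, so the cleanup walks from the leaves up to the subsystem root before unloading the modules. Condensed from the trace above (paths copied verbatim; the redirection target of the bare "echo 0" step is not visible in the xtrace output and is omitted rather than guessed):

    # auth-test cleanup sequence as traced above, leaves first, then modules, then key files
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet
    rm -f /tmp/spdk.key-null.err /tmp/spdk.key-null.AVi /tmp/spdk.key-sha256.O9B /tmp/spdk.key-sha384.ro6 /tmp/spdk.key-sha512.ftF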
00:23:39.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.003 16:16:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.003 16:16:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.540 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:41.541 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:41.541 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:41.541 Found net devices under 0000:09:00.0: cvl_0_0 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:41.541 Found net devices under 0000:09:00.1: cvl_0_1 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.541 16:16:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:23:41.541 00:23:41.541 --- 10.0.0.2 ping statistics --- 00:23:41.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.541 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:23:41.541 00:23:41.541 --- 10.0.0.1 ping statistics --- 00:23:41.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.541 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.541 16:16:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:41.542 ************************************ 00:23:41.542 START TEST nvmf_digest_clean 00:23:41.542 ************************************ 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=878532 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 878532 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 878532 ']' 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.542 
16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.542 [2024-07-15 16:16:27.159189] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:41.542 [2024-07-15 16:16:27.159264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.542 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.542 [2024-07-15 16:16:27.223652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.542 [2024-07-15 16:16:27.323277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.542 [2024-07-15 16:16:27.323332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.542 [2024-07-15 16:16:27.323359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.542 [2024-07-15 16:16:27.323370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.542 [2024-07-15 16:16:27.323379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
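
[editor's note] The waitforlisten 878532 / max_retries=100 trace above is the stock autotest helper polling the target's RPC socket until nvmf_tgt answers. A minimal sketch of that loop, assuming the rpc.py path used elsewhere in this log and the helper's default of 100 retries (the real implementation in autotest_common.sh does more bookkeeping):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1              # give up if the app already died
            if "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                        # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }
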
00:23:41.542 [2024-07-15 16:16:27.323414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.542 null0 00:23:41.542 [2024-07-15 16:16:27.491157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.542 [2024-07-15 16:16:27.515398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=878557 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 878557 /var/tmp/bperf.sock 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 878557 ']' 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
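
[editor's note] Between nvmfappstart and the first run_bperf, common_target_config (host/digest.sh@43) pushes the target configuration that produces the '*** TCP Transport Init ***', null0 and 'Listening on 10.0.0.2 port 4420' notices above. The exact rpc_cmd payload is not visible in this trace; a rough equivalent with standard rpc.py calls would be the following, where the null0 size/block size are illustrative assumptions and $NVMF_TRANSPORT_OPTS is '-t tcp -o' as set earlier in this log:

    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py framework_start_init                                  # target was started with --wait-for-rpc
    $rpc_py nvmf_create_transport $NVMF_TRANSPORT_OPTS            # '-t tcp -o'
    $rpc_py bdev_null_create null0 100 4096                       # backing bdev for the namespace (size assumed)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # allow any host
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
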
00:23:41.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.542 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.801 [2024-07-15 16:16:27.559729] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:41.801 [2024-07-15 16:16:27.559803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878557 ] 00:23:41.801 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.801 [2024-07-15 16:16:27.616788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.801 [2024-07-15 16:16:27.722114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.059 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.059 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:42.060 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:42.060 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:42.060 16:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:42.317 16:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.317 16:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.575 nvme0n1 00:23:42.575 16:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:42.575 16:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:42.575 Running I/O for 2 seconds... 
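
[editor's note] For each workload, run_bperf boils down to the command sequence traced above. Condensed, with the paths and flags used throughout this log (first combination shown: randread, 4 KiB blocks, queue depth 128, DSA disabled; the waitforlisten step between launch and first RPC is elided):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &           # comes up idle, RPC server only
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init  # no accel tweaks in the clean test
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst = NVMe/TCP data digest
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag is what makes every data PDU carry a crc32c, which is the work the digest test measures.
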
00:23:45.113 00:23:45.113 Latency(us) 00:23:45.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:45.113 nvme0n1 : 2.01 19441.22 75.94 0.00 0.00 6574.25 3398.16 14272.28 00:23:45.113 =================================================================================================================== 00:23:45.113 Total : 19441.22 75.94 0.00 0.00 6574.25 3398.16 14272.28 00:23:45.113 0 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:45.113 | select(.opcode=="crc32c") 00:23:45.113 | "\(.module_name) \(.executed)"' 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 878557 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 878557 ']' 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 878557 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 878557 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 878557' 00:23:45.113 killing process with pid 878557 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 878557 00:23:45.113 Received shutdown signal, test time was about 2.000000 seconds 00:23:45.113 00:23:45.113 Latency(us) 00:23:45.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.113 =================================================================================================================== 00:23:45.113 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.113 16:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 878557 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:45.371 16:16:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=878961 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 878961 /var/tmp/bperf.sock 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 878961 ']' 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:45.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.371 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.371 [2024-07-15 16:16:31.176860] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:45.371 [2024-07-15 16:16:31.176931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878961 ] 00:23:45.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:45.371 Zero copy mechanism will not be used. 
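
[editor's note] The MiB/s column in the result tables is simply IOPS times block size; the first table above can be checked with a one-liner (the same identity holds for the 128 KiB runs further down):

    awk 'BEGIN { printf "%.2f MiB/s\n", 19441.22 * 4096 / (1024 * 1024) }'   # -> 75.94 MiB/s, as reported
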
00:23:45.371 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.371 [2024-07-15 16:16:31.234373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.371 [2024-07-15 16:16:31.338198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.629 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.629 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:45.629 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:45.629 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:45.629 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:45.887 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.887 16:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.451 nvme0n1 00:23:46.451 16:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:46.451 16:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:46.451 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:46.451 Zero copy mechanism will not be used. 00:23:46.451 Running I/O for 2 seconds... 
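
[editor's note] When a run finishes, the pass/fail decision (digest.sh@93-96 in the traces) is: crc32c operations must have been executed, and with scan_dsa=false they must have gone through the 'software' accel module. As a standalone check using the same accel_get_stats call and jq filter seen in this log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    stats=$("$rpc_py" -s /var/tmp/bperf.sock accel_get_stats)
    read -r acc_module acc_executed < <(
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' <<< "$stats"
    )
    (( acc_executed > 0 )) && [[ $acc_module == software ]] \
        && echo "crc32c executed $acc_executed times in the software module"
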
00:23:48.352 00:23:48.352 Latency(us) 00:23:48.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.352 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:48.352 nvme0n1 : 2.00 6326.11 790.76 0.00 0.00 2525.31 673.56 9077.95 00:23:48.352 =================================================================================================================== 00:23:48.352 Total : 6326.11 790.76 0.00 0.00 2525.31 673.56 9077.95 00:23:48.352 0 00:23:48.352 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:48.352 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:48.352 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:48.352 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:48.352 | select(.opcode=="crc32c") 00:23:48.352 | "\(.module_name) \(.executed)"' 00:23:48.352 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 878961 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 878961 ']' 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 878961 00:23:48.610 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 878961 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 878961' 00:23:48.869 killing process with pid 878961 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 878961 00:23:48.869 Received shutdown signal, test time was about 2.000000 seconds 00:23:48.869 00:23:48.869 Latency(us) 00:23:48.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.869 =================================================================================================================== 00:23:48.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.869 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 878961 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:49.128 16:16:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=879488 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 879488 /var/tmp/bperf.sock 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 879488 ']' 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:49.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.128 16:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:49.128 [2024-07-15 16:16:34.941449] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:23:49.128 [2024-07-15 16:16:34.941523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879488 ] 00:23:49.128 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.128 [2024-07-15 16:16:35.000464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.128 [2024-07-15 16:16:35.105531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.387 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.387 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:49.387 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:49.387 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:49.387 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:49.645 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.645 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:50.211 nvme0n1 00:23:50.211 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:50.211 16:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:50.211 Running I/O for 2 seconds... 
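
[editor's note] Teardown between runs goes through the killprocess helper whose individual checks are visible in the traces (kill -0, uname, ps --no-headers -o comm=). Simplified to its essentials; the real helper also handles the case where the pid belongs to a sudo wrapper:

    killprocess_sketch() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still running?
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for bdevperf
        [ "$process_name" != sudo ] || return 1           # sudo case handled separately in the real helper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                    # reap it (it was started with &)
    }
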
00:23:52.117 00:23:52.117 Latency(us) 00:23:52.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:52.117 nvme0n1 : 2.01 22430.71 87.62 0.00 0.00 5700.23 2609.30 9951.76 00:23:52.117 =================================================================================================================== 00:23:52.117 Total : 22430.71 87.62 0.00 0.00 5700.23 2609.30 9951.76 00:23:52.117 0 00:23:52.117 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:52.117 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:52.117 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:52.117 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:52.117 | select(.opcode=="crc32c") 00:23:52.117 | "\(.module_name) \(.executed)"' 00:23:52.117 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 879488 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 879488 ']' 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 879488 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 879488 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 879488' 00:23:52.375 killing process with pid 879488 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 879488 00:23:52.375 Received shutdown signal, test time was about 2.000000 seconds 00:23:52.375 00:23:52.375 Latency(us) 00:23:52.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.375 =================================================================================================================== 00:23:52.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.375 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 879488 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:52.634 16:16:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=879903 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 879903 /var/tmp/bperf.sock 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 879903 ']' 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.634 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.892 [2024-07-15 16:16:38.680579] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:52.892 [2024-07-15 16:16:38.680649] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879903 ] 00:23:52.892 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:52.892 Zero copy mechanism will not be used. 
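
[editor's note] With this launch, all four combinations exercised by the clean test have been started (host/digest.sh@128-@131). digest.sh calls run_bperf four times; written out as a loop, the coverage is:

    for rw in randread randwrite; do
        run_bperf "$rw" 4096 128 false       # 4 KiB blocks, queue depth 128, DSA off
        run_bperf "$rw" 131072 16 false      # 128 KiB blocks, queue depth 16, DSA off
    done
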
00:23:52.892 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.892 [2024-07-15 16:16:38.737401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.892 [2024-07-15 16:16:38.839887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:52.892 16:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:53.460 16:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.460 16:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.717 nvme0n1 00:23:53.717 16:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:53.717 16:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:53.973 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:53.973 Zero copy mechanism will not be used. 00:23:53.973 Running I/O for 2 seconds... 
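
[editor's note] Every SPDK app in this job (nvmf_tgt and each bdevperf) is launched with --wait-for-rpc, so only the reactors and the RPC server come up at first; the test then pushes any pre-init configuration it needs and finishes startup with framework_start_init. The generic shape, with $app, $sock and $rpc_py as placeholders:

    "$app" -r "$sock" --wait-for-rpc &               # RPC server only, framework not yet initialized
    "$rpc_py" -s "$sock" rpc_get_methods >/dev/null  # wait until the socket answers (waitforlisten)
    # ...optional pre-init config, e.g. accel_assign_opc in the error test below...
    "$rpc_py" -s "$sock" framework_start_init        # remaining subsystems initialize, app proceeds

In the clean test nothing is configured before framework_start_init; the hook only matters for the DSA and error variants.
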
00:23:55.906 00:23:55.906 Latency(us) 00:23:55.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.906 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:55.906 nvme0n1 : 2.00 6272.27 784.03 0.00 0.00 2543.60 1905.40 7184.69 00:23:55.906 =================================================================================================================== 00:23:55.906 Total : 6272.27 784.03 0.00 0.00 2543.60 1905.40 7184.69 00:23:55.906 0 00:23:55.906 16:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:55.906 16:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:55.906 16:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:55.906 16:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:55.906 | select(.opcode=="crc32c") 00:23:55.906 | "\(.module_name) \(.executed)"' 00:23:55.906 16:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 879903 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 879903 ']' 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 879903 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 879903 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 879903' 00:23:56.165 killing process with pid 879903 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 879903 00:23:56.165 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.165 00:23:56.165 Latency(us) 00:23:56.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.165 =================================================================================================================== 00:23:56.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.165 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 879903 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 878532 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@948 -- # '[' -z 878532 ']' 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 878532 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 878532 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 878532' 00:23:56.426 killing process with pid 878532 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 878532 00:23:56.426 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 878532 00:23:56.684 00:23:56.684 real 0m15.540s 00:23:56.684 user 0m30.828s 00:23:56.684 sys 0m4.207s 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.684 ************************************ 00:23:56.684 END TEST nvmf_digest_clean 00:23:56.684 ************************************ 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.684 16:16:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:56.941 ************************************ 00:23:56.941 START TEST nvmf_digest_error 00:23:56.941 ************************************ 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=880364 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 880364 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 880364 ']' 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.941 16:16:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.941 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:56.941 [2024-07-15 16:16:42.756791] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:56.941 [2024-07-15 16:16:42.756881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.941 [2024-07-15 16:16:42.823403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.941 [2024-07-15 16:16:42.925186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.941 [2024-07-15 16:16:42.925258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.941 [2024-07-15 16:16:42.925272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.941 [2024-07-15 16:16:42.925304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.941 [2024-07-15 16:16:42.925313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
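
[editor's note] What distinguishes nvmf_digest_error (starting here) from the clean test: before the target finishes initializing, crc32c is re-assigned to the error-injecting accel module, and corruption is then injected so the digest the target appends to its C2H data no longer matches the payload. The initiator detects the mismatch, logs the 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR' completions seen further below, and keeps retrying because of --bdev-retry-count -1. The RPCs traced in the following lines amount to:

    # target side (rpc_cmd -> /var/tmp/spdk.sock), issued before/after framework_start_init
    rpc_cmd accel_assign_opc -o crc32c -m error                      # route crc32c through the error module
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256     # corrupt the next 256 crc32c results
    # initiator side (bperf_rpc -> /var/tmp/bperf.sock)
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # retry forever, keep error stats
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
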
00:23:56.941 [2024-07-15 16:16:42.925345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.198 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.198 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:57.198 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.198 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.198 16:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.198 [2024-07-15 16:16:43.013922] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.198 null0 00:23:57.198 [2024-07-15 16:16:43.129403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.198 [2024-07-15 16:16:43.153656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=880482 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 880482 /var/tmp/bperf.sock 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 880482 ']' 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.198 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:57.199 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.199 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.199 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.199 [2024-07-15 16:16:43.201710] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:57.199 [2024-07-15 16:16:43.201780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880482 ] 00:23:57.458 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.458 [2024-07-15 16:16:43.261070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.458 [2024-07-15 16:16:43.370414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.716 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.716 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:57.716 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.716 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.972 16:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.229 nvme0n1 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:58.229 16:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.489 Running I/O for 2 seconds... 00:23:58.489 [2024-07-15 16:16:44.340594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.340642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.340675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.355419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.355450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.355482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.371359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.371407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.371424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.387751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.387781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.387822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.401556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.401599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.401616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.414176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.414206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.414238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.432085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.432114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20793 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.432145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.446980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.447010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.447027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.458556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.458584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.458614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.476027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.476072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.476089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.489 [2024-07-15 16:16:44.491765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.489 [2024-07-15 16:16:44.491859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.489 [2024-07-15 16:16:44.491878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.504762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.504831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.504919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.517326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.517362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.517380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.532740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.532769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:30 nsid:1 lba:15017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.532798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.549295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.549323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.549354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.561108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.561136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.561152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.576422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.576451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.576467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.592147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.592178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.592194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.605143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.605176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.605194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.621901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.621946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.621970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.633519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.633546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.633562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.649497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.649528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.649560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.664716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.664745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.664761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.680239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.680327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.680347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.692593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.692652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.708778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.708829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.708849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.723701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.723731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.723748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.736514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 
[2024-07-15 16:16:44.736541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.736557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.748 [2024-07-15 16:16:44.750879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:58.748 [2024-07-15 16:16:44.750907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.748 [2024-07-15 16:16:44.750923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.765286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.006 [2024-07-15 16:16:44.765321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.006 [2024-07-15 16:16:44.765337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.781919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.006 [2024-07-15 16:16:44.781948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.006 [2024-07-15 16:16:44.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.797606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.006 [2024-07-15 16:16:44.797634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.006 [2024-07-15 16:16:44.797649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.808370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.006 [2024-07-15 16:16:44.808397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.006 [2024-07-15 16:16:44.808413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.824451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.006 [2024-07-15 16:16:44.824507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.006 [2024-07-15 16:16:44.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.006 [2024-07-15 16:16:44.838401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.838431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.838448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.850388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.850417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.850434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.866529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.866561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.866634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.877825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.877876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.895271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.895301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.895317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.911679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.911725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.911742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.926586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.926618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.926635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.938702] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.938733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.938748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.951351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.951384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.963434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.963465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.963480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.976393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.976424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.976454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:44.990413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:44.990446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:44.990464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.007 [2024-07-15 16:16:45.006643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.007 [2024-07-15 16:16:45.006716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 16:16:45.006759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.264 [2024-07-15 16:16:45.017579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.017626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:59.265 [2024-07-15 16:16:45.033479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.033510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.033526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.047809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.047842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.047859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.060346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.060376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.060393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.074635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.074666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.074682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.091355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.091387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.091404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.105722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.105783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.105804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.118321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.118352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.118368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.132052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.132091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.132108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.144033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.144063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.144080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.161187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.161260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.161364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.174143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.174270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.185425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.185454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.185471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.200497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.200528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.200545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.212248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.212293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.212310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.226011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.226044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.226061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.238884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.238914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.238930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.250905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.250952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.250980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.265 [2024-07-15 16:16:45.262565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.265 [2024-07-15 16:16:45.262596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.265 [2024-07-15 16:16:45.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.277305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.277335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.277352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.289228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.289259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.289291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.301987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.302030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.302047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.315986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.316027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.316044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.331441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.331474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.331491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.343105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.343135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.343152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.357888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.357920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.357970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.374271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.374318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.374334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.386801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.386833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.386849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.403385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.403416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:59.525 [2024-07-15 16:16:45.403433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.417854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.417884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.417900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.430234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.430279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.430296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.445062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.445093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.445110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.461664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.461695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.461710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.476738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.476769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.476785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.491922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.491983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.508048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.508079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.508097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.525 [2024-07-15 16:16:45.523243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.525 [2024-07-15 16:16:45.523277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.525 [2024-07-15 16:16:45.523309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.534873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.534904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.534920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.546545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.546577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.546594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.560053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.560084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.560101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.573087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.573118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.573134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.585253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.585282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.585298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.599721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.599752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.599791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.615334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.615380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.615397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.631600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.631634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.631651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.646512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.646544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.646561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.658187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.658233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.658250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.674398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.674431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.674464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.689434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 [2024-07-15 16:16:45.689465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.689496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.784 [2024-07-15 16:16:45.706038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.784 
[2024-07-15 16:16:45.706069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.784 [2024-07-15 16:16:45.706086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.785 [2024-07-15 16:16:45.721679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.785 [2024-07-15 16:16:45.721709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.785 [2024-07-15 16:16:45.721725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.785 [2024-07-15 16:16:45.736164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.785 [2024-07-15 16:16:45.736203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.785 [2024-07-15 16:16:45.736221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.785 [2024-07-15 16:16:45.748842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.785 [2024-07-15 16:16:45.748871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.785 [2024-07-15 16:16:45.748887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.785 [2024-07-15 16:16:45.764552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.785 [2024-07-15 16:16:45.764583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.785 [2024-07-15 16:16:45.764600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.785 [2024-07-15 16:16:45.776993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:23:59.785 [2024-07-15 16:16:45.777025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.785 [2024-07-15 16:16:45.777056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.790733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.790769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.790787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.804574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.804609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.804627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.816544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.816575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.816592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.829984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.830015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.846633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.846665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.846682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.863067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.863098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.863114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.879866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.879896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.879913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.895320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.895369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.907047] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.907080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.907097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.921800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.921830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.921846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.937731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.937762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.937779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.952036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.952069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.952087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.044 [2024-07-15 16:16:45.966473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.044 [2024-07-15 16:16:45.966504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.044 [2024-07-15 16:16:45.966521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.045 [2024-07-15 16:16:45.978488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.045 [2024-07-15 16:16:45.978520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.045 [2024-07-15 16:16:45.978559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.045 [2024-07-15 16:16:45.994579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.045 [2024-07-15 16:16:45.994611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.045 [2024-07-15 16:16:45.994629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:00.045 [2024-07-15 16:16:46.009026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.045 [2024-07-15 16:16:46.009057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.045 [2024-07-15 16:16:46.009073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.045 [2024-07-15 16:16:46.020787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.045 [2024-07-15 16:16:46.020818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.045 [2024-07-15 16:16:46.020835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.045 [2024-07-15 16:16:46.036589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.045 [2024-07-15 16:16:46.036621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.045 [2024-07-15 16:16:46.036664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.304 [2024-07-15 16:16:46.053630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.304 [2024-07-15 16:16:46.053676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.304 [2024-07-15 16:16:46.053692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.304 [2024-07-15 16:16:46.069285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.304 [2024-07-15 16:16:46.069315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.304 [2024-07-15 16:16:46.069330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.304 [2024-07-15 16:16:46.085128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.304 [2024-07-15 16:16:46.085160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.304 [2024-07-15 16:16:46.085177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.304 [2024-07-15 16:16:46.099489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.304 [2024-07-15 16:16:46.099560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.304 [2024-07-15 16:16:46.099576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.304 [2024-07-15 16:16:46.114140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.114177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.114195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.132132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.132163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.132179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.143105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.143138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.143154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.158796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.158845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.158862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.174873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.174905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.174922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.189177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.189208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.189224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.205644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.205683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.205699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.222214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.222246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.222276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.233967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.234004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.234021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.249168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.249202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.249219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.265853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.265884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.265901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.280307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.280336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.280352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.305 [2024-07-15 16:16:46.296760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.305 [2024-07-15 16:16:46.296791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.305 [2024-07-15 16:16:46.296807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.562 [2024-07-15 16:16:46.308717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50) 00:24:00.563 [2024-07-15 16:16:46.308764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.563 [2024-07-15 16:16:46.308780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.563 [2024-07-15 16:16:46.322759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ded50)
00:24:00.563 [2024-07-15 16:16:46.322789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.563 [2024-07-15 16:16:46.322805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.563
00:24:00.563 Latency(us)
00:24:00.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.563 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:00.563 nvme0n1 : 2.01 17712.89 69.19 0.00 0.00 7217.03 3519.53 24466.77
00:24:00.563 ===================================================================================================================
00:24:00.563 Total : 17712.89 69.19 0.00 0.00 7217.03 3519.53 24466.77
00:24:00.563 0
00:24:00.563 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:00.563 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:00.563 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:00.563 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:00.563 | .driver_specific
00:24:00.563 | .nvme_error
00:24:00.563 | .status_code
00:24:00.563 | .command_transient_transport_error'
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 880482
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 880482 ']'
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 880482
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880482
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880482'
00:24:00.821 killing process with pid 880482
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 880482
00:24:00.821 Received shutdown signal, test time was about 2.000000 seconds
00:24:00.821
00:24:00.821 Latency(us)
00:24:00.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.821 ===================================================================================================================
00:24:00.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:00.821 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 880482
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=880891
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 880891 /var/tmp/bperf.sock
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 880891 ']'
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:01.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:01.079 16:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:01.079 [2024-07-15 16:16:46.932290] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization...
00:24:01.079 [2024-07-15 16:16:46.932377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880891 ]
00:24:01.079 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:01.079 Zero copy mechanism will not be used.
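The summary line in the first table is self-consistent: 17712.89 IOPS of 4096-byte reads works out to about 69.2 MiB/s (17712.89 * 4096 / 2^20), matching the MiB/s column. The pass condition for that run is the (( 139 > 0 )) check in the trace above: get_transient_errcount queries bdevperf's RPC server for the per-status NVMe error counters and expects at least one completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the injected data digest errors produce. A minimal standalone sketch of the same query, using only the socket path, bdev name and jq filter visible in the trace (rpc.py shown relative to the SPDK tree; the log uses the full workspace path):

# Count completions with status COMMAND TRANSIENT TRANSPORT ERROR (00/22) recorded for nvme0n1.
# Assumes bdev_nvme_set_options --nvme-error-stat was issued earlier, as in the trace.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
# The digest-error test treats any non-zero count (139 in this run) as a pass.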
00:24:01.079 EAL: No free 2048 kB hugepages reported on node 1
00:24:01.079 [2024-07-15 16:16:46.989400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:01.337 [2024-07-15 16:16:47.094026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:01.337 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:01.337 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:01.337 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:01.337 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:01.644 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:01.902 nvme0n1
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:01.902 16:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:02.160 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:02.160 Zero copy mechanism will not be used.
00:24:02.160 Running I/O for 2 seconds...
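Before the 2-second 131072-byte randread pass above starts, the trace configures the freshly started bdevperf process through the same RPC socket: NVMe error statistics are enabled, any previous crc32c error injection is cleared, the controller is attached over TCP with data digest (--ddgst) enabled, and crc32c corruption is then injected so that received data PDUs fail their digest check during the run. A condensed sketch of that sequence, repeating only the RPC calls that appear in the trace (address, NQN and socket path as logged; rpc.py and bdevperf.py shown relative to the SPDK tree, so exact paths depend on the workspace):

# bdevperf is already listening on /var/tmp/bperf.sock (started with -z, waiting for perform_tests)
RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
# enable per-status NVMe error counters; retry count passed exactly as logged
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# make sure no stale crc32c error injection is active
$RPC accel_error_inject_error -o crc32c -t disable
# attach the target with TCP data digest enabled; the trace shows it come up as nvme0n1
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# inject corruption into crc32c results (arguments exactly as logged) so data digests fail
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# kick off the queued workload
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests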
00:24:02.160 [2024-07-15 16:16:48.004793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.004847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.004866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.010822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.010856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.010874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.016596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.016629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.016647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.022208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.022250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.022270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.027815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.027848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.027865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.033426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.033459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.033476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.038915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.038970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.160 [2024-07-15 16:16:48.038991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.160 [2024-07-15 16:16:48.044671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.160 [2024-07-15 16:16:48.044702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.044719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.050350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.050383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.050400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.055913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.055965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.055986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.061597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.061628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.061645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.067174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.067205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.067222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.072816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.072846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.078365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.078396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.078413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.083972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.084019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.084037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.089717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.089747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.089763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.095419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.095451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.095468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.101109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.101141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.101158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.106818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.106850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.106867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.112437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.112468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.112485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.118012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.118044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.161 [2024-07-15 16:16:48.118066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.123672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.123703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.129391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.129422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.129439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.134974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.135005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.135022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.140836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.140867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.140883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.146617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.146648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.146664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.152170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.152204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.157883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.157914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.157945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.161 [2024-07-15 16:16:48.163630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.161 [2024-07-15 16:16:48.163660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.161 [2024-07-15 16:16:48.163678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.169404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.169441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.169458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.175042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.175075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.175092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.180723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.180769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.180786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.186549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.186580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.186597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.192179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.192226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.192243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.197870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.197916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.419 [2024-07-15 16:16:48.197932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.419 [2024-07-15 16:16:48.203608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.419 [2024-07-15 16:16:48.203652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.203669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.209248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.209279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.209311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.214912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.214943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.214986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.220701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.220732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.220749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.226347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.226377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.226393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.231971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.232003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.232021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.237754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 
00:24:02.420 [2024-07-15 16:16:48.237784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.237801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.243357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.243389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.243406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.248851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.248882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.248914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.254479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.254510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.254527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.260138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.260170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.260188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.266035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.266092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.271593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.271639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.271656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.277223] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.277254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.277271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.282987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.283018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.283034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.288635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.288665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.288682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.294290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.294321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.294337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.299865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.299911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.299928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.305575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.305605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.305622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.311042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.311074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.311092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.316590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.316621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.316638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.322181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.322212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.322229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.327722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.327768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.327785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.333282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.333313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.333330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.339019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.339053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.339070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.344761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.344791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.344807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.350312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.350343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.350360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.355866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.355913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.361608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.361639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.420 [2024-07-15 16:16:48.361660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.420 [2024-07-15 16:16:48.367194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.420 [2024-07-15 16:16:48.367225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.367242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.373027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.373068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.373095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.378306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.378338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.378368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.384037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.384080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.384109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.390160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.390194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.390212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.396530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.396563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.396603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.401925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.402005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.407932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.407975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.407997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.414174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.414213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.414249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.421 [2024-07-15 16:16:48.420682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.421 [2024-07-15 16:16:48.420715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.421 [2024-07-15 16:16:48.420739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.680 [2024-07-15 16:16:48.426803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.680 [2024-07-15 16:16:48.426836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.680 [2024-07-15 16:16:48.426854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.680 [2024-07-15 16:16:48.431289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:02.680 [2024-07-15 16:16:48.431340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.680 [2024-07-15 16:16:48.431362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.680 - 00:24:03.468 [2024-07-15 16:16:48.436107 - 16:16:49.216093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0), repeated at ~5-6 ms intervals; each occurrence is followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 (cid 0/1/2/6/7/8/9/15, various lba) nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0
00:24:03.468 [2024-07-15 16:16:49.221781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0)
00:24:03.468 [2024-07-15 16:16:49.221815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:24:03.468 [2024-07-15 16:16:49.221833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.227529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.227576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.227592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.233263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.233297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.233329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.239060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.239093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.239110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.244932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.244974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.245009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.250632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.250665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.250688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.256378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.256427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.256445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.262174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.262208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.262226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.267971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.268019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.268037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.273641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.468 [2024-07-15 16:16:49.273675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.468 [2024-07-15 16:16:49.273692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.468 [2024-07-15 16:16:49.279321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.279370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.279387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.284980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.285014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.285032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.290633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.290665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.290682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.296400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.296447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.296464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.302051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.302090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.302108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.307906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.307960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.307981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.313692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.313726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.313744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.319505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.319539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.319557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.325367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.325419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.325448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.330724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.330777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.330797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.336395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.336428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.336445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.342183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 
[2024-07-15 16:16:49.342216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.342234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.347726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.347759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.347777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.351839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.351881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.351911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.356526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.356559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.356576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.362278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.362326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.362344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.367925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.367965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.367985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.373673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.373705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.373722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.379400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.379433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.379470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.385059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.385091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.385108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.390813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.390846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.390863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.396669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.396717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.396744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.402538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.402570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.402587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.408361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.408393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.408409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.414064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.414112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.414130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.419847] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.419889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.419921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.425587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.425619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.425637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.431348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.431381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.431400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.437051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.469 [2024-07-15 16:16:49.437092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.469 [2024-07-15 16:16:49.437110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.469 [2024-07-15 16:16:49.442768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.470 [2024-07-15 16:16:49.442799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-07-15 16:16:49.442816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.470 [2024-07-15 16:16:49.448636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.470 [2024-07-15 16:16:49.448688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-07-15 16:16:49.448705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.470 [2024-07-15 16:16:49.454647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.470 [2024-07-15 16:16:49.454677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-07-15 16:16:49.454694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:03.470 [2024-07-15 16:16:49.460493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.470 [2024-07-15 16:16:49.460525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-07-15 16:16:49.460542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.470 [2024-07-15 16:16:49.466432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.470 [2024-07-15 16:16:49.466464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.470 [2024-07-15 16:16:49.466481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.472282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.472330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.478013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.478061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.478078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.483704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.483735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.483752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.489486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.489534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.489552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.495286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.495335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.495352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.501073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.501106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.501124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.506786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.506820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.506853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.512509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.512541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.512573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.518452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.518483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.518499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.524327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.524361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.524379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.530128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.530160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.530177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.535749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.535799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.535817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.541533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.541584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.541602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.547190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.547224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.547248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.552834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.552868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.552885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.558604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.558654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.558673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.564407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.564440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.564457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.571192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.571252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.571281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.578422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.578455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.578472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.586044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.586077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.586095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.593821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.593870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.593888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.601623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.601661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.601678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.609415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.609469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.609487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.616544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.616582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.616611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.620878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.620910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.731 [2024-07-15 16:16:49.620928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.628519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.628552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:03.731 [2024-07-15 16:16:49.628569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.731 [2024-07-15 16:16:49.636370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.731 [2024-07-15 16:16:49.636401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.636417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.644261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.644292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.644323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.652171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.652227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.652257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.658404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.658444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.658466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.665684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.665716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.665738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.673378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.673427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.673444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.681229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.681263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.688793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.688826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.688843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.697020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.697054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.705392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.705424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.705442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.712866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.712900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.712918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.719813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.719846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.719863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.726364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.726396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.726413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.732 [2024-07-15 16:16:49.732619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.732 [2024-07-15 16:16:49.732657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.732 [2024-07-15 16:16:49.732676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.738553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.738587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.738604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.744309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.744341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.744359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.750000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.750033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.750051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.755891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.755924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.755966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.761646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.761693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.761711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.767690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.767722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.767739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.773491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 
[2024-07-15 16:16:49.773525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.773543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.779160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.779193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.993 [2024-07-15 16:16:49.779212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.993 [2024-07-15 16:16:49.784845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.993 [2024-07-15 16:16:49.784878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.784897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.791354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.791386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.791405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.797651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.797685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.804981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.805014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.805033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.812568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.812603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.812621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.820472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.820503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.820520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.828285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.828317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.828349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.836126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.836160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.836178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.843683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.843714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.843753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.849476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.849509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.849526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.853138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.853189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.859006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.859037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.859055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.864513] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.864547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.864581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.870199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.870232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.870250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.876046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.876102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.876119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.881834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.881865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.881882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.887720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.887752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.887784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.893521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.893575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.899431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.899465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.899482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:03.994 [2024-07-15 16:16:49.905236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.905284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.905302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.910849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.910882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.910900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.916537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.916571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.916589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.922374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.922439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.928294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.928328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.928361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.933939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.933981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.934000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.939868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.939913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.939930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.945606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.945637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.945668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.951368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.951413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.951430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.957523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.957568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.957584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.963224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.963271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.963289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.969045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.969077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.969094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.974649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.974680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.974696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.994 [2024-07-15 16:16:49.980428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0) 00:24:03.994 [2024-07-15 16:16:49.980458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.994 [2024-07-15 16:16:49.980474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:03.994 [2024-07-15 16:16:49.985766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0)
00:24:03.994 [2024-07-15 16:16:49.985796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.994 [2024-07-15 16:16:49.985812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:03.994 [2024-07-15 16:16:49.991393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0)
00:24:03.994 [2024-07-15 16:16:49.991438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.994 [2024-07-15 16:16:49.991460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:04.255 [2024-07-15 16:16:49.997207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0)
00:24:04.255 [2024-07-15 16:16:49.997242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.255 [2024-07-15 16:16:49.997274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:04.255 [2024-07-15 16:16:50.003079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19654f0)
00:24:04.255 [2024-07-15 16:16:50.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.255 [2024-07-15 16:16:50.003132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:04.255
00:24:04.255 Latency(us)
00:24:04.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.255 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:04.255 nvme0n1 : 2.00 5301.34 662.67 0.00 0.00 3013.48 807.06 8495.41
00:24:04.255 ===================================================================================================================
00:24:04.255 Total : 5301.34 662.67 0.00 0.00 3013.48 807.06 8495.41
00:24:04.255 0
00:24:04.255 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:04.255 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:04.255 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:04.255 | .driver_specific
00:24:04.255 | .nvme_error
00:24:04.255 | .status_code
00:24:04.255 | .command_transient_transport_error'
00:24:04.255 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 342 > 0 ))
00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 880891 00:24:04.515
16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 880891 ']' 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 880891 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880891 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.515 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.516 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880891' 00:24:04.516 killing process with pid 880891 00:24:04.516 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 880891 00:24:04.516 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.516 00:24:04.516 Latency(us) 00:24:04.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.516 =================================================================================================================== 00:24:04.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.516 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 880891 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=881303 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 881303 /var/tmp/bperf.sock 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 881303 ']' 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
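The trace above tears down the previous bdevperf instance and launches a fresh one in suspended mode on a private RPC socket, then waits for it to accept RPCs before configuring it. A minimal bash sketch of that launch-and-wait pattern, using the binary path, socket and flags visible in this run; the polling loop is an illustrative stand-in, not the real waitforlisten() helper from autotest_common.sh:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Same invocation as host/digest.sh@57; -z keeps bdevperf idle until it is told to run via RPC.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Illustrative replacement for waitforlisten(): poll the RPC socket until it answers.
until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

Starting the application suspended is what lets the script finish its error-injection and controller setup over the same socket before any I/O is issued.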
00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:04.774 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:04.774 [2024-07-15 16:16:50.620394] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization...
00:24:04.774 [2024-07-15 16:16:50.620480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881303 ]
00:24:04.774 EAL: No free 2048 kB hugepages reported on node 1
00:24:04.774 [2024-07-15 16:16:50.678108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:05.034 [2024-07-15 16:16:50.783437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:05.034 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:05.034 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:05.034 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:05.034 16:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:05.291 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:05.857 nvme0n1
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:05.857 16:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:05.857 Running I/O for 2 seconds...
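Pieced together from the RPCs in the trace above, the digest-error pass amounts to: enable per-controller NVMe error counters, attach the target with TCP data digest enabled, switch the accel error injector to corrupting CRC-32C results, run the queued bdevperf job, and then require that the corruptions surfaced as transient transport errors. A bash sketch of that sequence under the same socket, address and NQN as this run; the digest.sh helper names (bperf_rpc, rpc_cmd, get_transient_errcount) are not reproduced:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep NVMe error statistics and bdev retries configured as in the trace above.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Injection stays disabled while the controller connects...
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# ...then the accel error injector is told to corrupt crc32c results (same arguments as the trace).
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the queued bdevperf job (bdevperf was started with -z).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
# Each digest mismatch is expected to complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))

In the randread pass earlier in this log the same check extracted 342 transient transport errors, which is exactly what the (( 342 > 0 )) assertion was verifying; the randwrite output that follows feeds the next run of that check.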
00:24:05.857 [2024-07-15 16:16:51.746753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f6458 00:24:05.857 [2024-07-15 16:16:51.747820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.747858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.759426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e5ec8 00:24:05.857 [2024-07-15 16:16:51.760395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.760426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.770784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f6cc8 00:24:05.857 [2024-07-15 16:16:51.772099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.772130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.782564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ef6a8 00:24:05.857 [2024-07-15 16:16:51.783648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.783678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.794652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ddc00 00:24:05.857 [2024-07-15 16:16:51.795618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.795652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.805729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e5a90 00:24:05.857 [2024-07-15 16:16:51.806514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.806548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.819628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e95a0 00:24:05.857 [2024-07-15 16:16:51.821375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.821418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.827947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fd208 00:24:05.857 [2024-07-15 16:16:51.828675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.828723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.843549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f0ff8 00:24:05.857 [2024-07-15 16:16:51.845406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.845435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.857 [2024-07-15 16:16:51.851743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e3498 00:24:05.857 [2024-07-15 16:16:51.852709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.857 [2024-07-15 16:16:51.852752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.864287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e9168 00:24:06.115 [2024-07-15 16:16:51.865505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.115 [2024-07-15 16:16:51.865534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.876046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f20d8 00:24:06.115 [2024-07-15 16:16:51.876785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.115 [2024-07-15 16:16:51.876828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.887115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f20d8 00:24:06.115 [2024-07-15 16:16:51.887780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.115 [2024-07-15 16:16:51.887809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.898883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f0350 00:24:06.115 [2024-07-15 16:16:51.899885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.115 [2024-07-15 16:16:51.899928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.910413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df118 00:24:06.115 [2024-07-15 16:16:51.911086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.115 [2024-07-15 16:16:51.911120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:06.115 [2024-07-15 16:16:51.921919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e27f0 00:24:06.115 [2024-07-15 16:16:51.922980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.923023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.934836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f0bc0 00:24:06.116 [2024-07-15 16:16:51.936132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.936165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.946890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fb480 00:24:06.116 [2024-07-15 16:16:51.948396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.948424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.958938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f9f68 00:24:06.116 [2024-07-15 16:16:51.960638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.960670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.967289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ea680 00:24:06.116 [2024-07-15 16:16:51.968056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.968084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.981518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ddc00 00:24:06.116 [2024-07-15 16:16:51.982900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.982929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:51.993224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e1b48 00:24:06.116 [2024-07-15 16:16:51.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:51.994252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.004349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eb760 00:24:06.116 [2024-07-15 16:16:52.005779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.005808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.016089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190dfdc0 00:24:06.116 [2024-07-15 16:16:52.017259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.017293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.028162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f92c0 00:24:06.116 [2024-07-15 16:16:52.029211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.029246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.039857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f81e0 00:24:06.116 [2024-07-15 16:16:52.041320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.041350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.051650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df550 00:24:06.116 [2024-07-15 16:16:52.052686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.052716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.062124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f9f68 00:24:06.116 [2024-07-15 16:16:52.063393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.063423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.073633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df988 00:24:06.116 [2024-07-15 16:16:52.074635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.074663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.085162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df550 00:24:06.116 [2024-07-15 16:16:52.086266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.086295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.099596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fb048 00:24:06.116 [2024-07-15 16:16:52.101389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.101417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:06.116 [2024-07-15 16:16:52.108013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e8d30 00:24:06.116 [2024-07-15 16:16:52.108839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.116 [2024-07-15 16:16:52.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.122383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e84c0 00:24:06.376 [2024-07-15 16:16:52.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.123789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.133930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190edd58 00:24:06.376 [2024-07-15 16:16:52.135312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.135359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.144894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f5378 00:24:06.376 [2024-07-15 16:16:52.145943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 
16:16:52.146003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.156707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e6300 00:24:06.376 [2024-07-15 16:16:52.157715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.157759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.168855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e7c50 00:24:06.376 [2024-07-15 16:16:52.170058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.170088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.179824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f4f40 00:24:06.376 [2024-07-15 16:16:52.180764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.180808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.191275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e4de8 00:24:06.376 [2024-07-15 16:16:52.192381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.192424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.203338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f6020 00:24:06.376 [2024-07-15 16:16:52.204553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.204582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.215117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.376 [2024-07-15 16:16:52.216347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.216375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.226098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e8d30 00:24:06.376 [2024-07-15 16:16:52.227204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:06.376 [2024-07-15 16:16:52.227247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.237188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fd208 00:24:06.376 [2024-07-15 16:16:52.237860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.376 [2024-07-15 16:16:52.237908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:06.376 [2024-07-15 16:16:52.249076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df118 00:24:06.377 [2024-07-15 16:16:52.250062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.250091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.262347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df118 00:24:06.377 [2024-07-15 16:16:52.263873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.263902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.274391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f20d8 00:24:06.377 [2024-07-15 16:16:52.275883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.275925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.284840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fb048 00:24:06.377 [2024-07-15 16:16:52.286053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.286083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.296749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eb328 00:24:06.377 [2024-07-15 16:16:52.297755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.297802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.307623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190df988 00:24:06.377 [2024-07-15 16:16:52.308936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15217 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.308972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.319153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eb760 00:24:06.377 [2024-07-15 16:16:52.320068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.320097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.330638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eb328 00:24:06.377 [2024-07-15 16:16:52.331694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.331722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.342324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fcdd0 00:24:06.377 [2024-07-15 16:16:52.343015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.343049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.356322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e95a0 00:24:06.377 [2024-07-15 16:16:52.357910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.357953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.365443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fd208 00:24:06.377 [2024-07-15 16:16:52.366565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.366593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:06.377 [2024-07-15 16:16:52.377403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.377 [2024-07-15 16:16:52.378615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.377 [2024-07-15 16:16:52.378644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.392001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ff3c8 00:24:06.660 [2024-07-15 16:16:52.393884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3385 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.660 [2024-07-15 16:16:52.393927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.400311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eaef0 00:24:06.660 [2024-07-15 16:16:52.401108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.660 [2024-07-15 16:16:52.401137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.411719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eaab8 00:24:06.660 [2024-07-15 16:16:52.412773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.660 [2024-07-15 16:16:52.412817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.423754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e5220 00:24:06.660 [2024-07-15 16:16:52.424858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.660 [2024-07-15 16:16:52.424888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.435385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190ecc78 00:24:06.660 [2024-07-15 16:16:52.436075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.660 [2024-07-15 16:16:52.436129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:06.660 [2024-07-15 16:16:52.448740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eee38 00:24:06.661 [2024-07-15 16:16:52.450176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.450205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.459710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f8618 00:24:06.661 [2024-07-15 16:16:52.460974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.461004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.473554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e0630 00:24:06.661 [2024-07-15 16:16:52.475418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:4679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.475462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.481822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fa3a0 00:24:06.661 [2024-07-15 16:16:52.482760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.482789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.496653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f9f68 00:24:06.661 [2024-07-15 16:16:52.498513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.498556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.504907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eff18 00:24:06.661 [2024-07-15 16:16:52.505819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.505847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.516783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190dfdc0 00:24:06.661 [2024-07-15 16:16:52.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.517967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.528729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.661 [2024-07-15 16:16:52.529829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.529873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.540132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e7818 00:24:06.661 [2024-07-15 16:16:52.540848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.540882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.554604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190eaab8 00:24:06.661 [2024-07-15 16:16:52.556433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.556477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.562813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190feb58 00:24:06.661 [2024-07-15 16:16:52.563704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.563732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.574029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f2510 00:24:06.661 [2024-07-15 16:16:52.574820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.574850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.588216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f7538 00:24:06.661 [2024-07-15 16:16:52.589686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.589732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.599133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190f92c0 00:24:06.661 [2024-07-15 16:16:52.600147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.600191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.609897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fc560 00:24:06.661 [2024-07-15 16:16:52.611023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.611052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.621513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e3d08 00:24:06.661 [2024-07-15 16:16:52.622438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.622467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.632913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e0630 00:24:06.661 [2024-07-15 16:16:52.633984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.634015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.644506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fb048 00:24:06.661 [2024-07-15 16:16:52.645163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.645193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:06.661 [2024-07-15 16:16:52.658263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190fb480 00:24:06.661 [2024-07-15 16:16:52.659867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.661 [2024-07-15 16:16:52.659896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.670812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e3060 00:24:06.922 [2024-07-15 16:16:52.672425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.672453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.682891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e1f80 00:24:06.922 [2024-07-15 16:16:52.684675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.684719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.693339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.693593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.707013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.707247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.707291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.720822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 
16:16:52.721160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.721189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.734581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.734818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.734846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.748418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.748642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.748673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.762060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.762316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.762343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.775985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.776235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.776277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.789694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.790005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.803087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.803335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.803377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.816429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 
00:24:06.922 [2024-07-15 16:16:52.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.816694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.829872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.830112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.830143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.843364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.843602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.843630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.856822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.857184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.857214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.870285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.870551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.870580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.883740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.884090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.897292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.897605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.897634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.910770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with 
pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.911062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.911091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.922 [2024-07-15 16:16:52.924340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:06.922 [2024-07-15 16:16:52.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.922 [2024-07-15 16:16:52.924686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:52.937839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:52.938182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:52.938211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:52.951365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:52.951611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:52.951638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:52.964809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:52.965088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:52.965117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:52.978229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:52.978518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:52.978546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:52.991787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:52.992055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:52.992083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.005240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.005493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.005520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.018689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.018925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.018953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.032214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.032474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.032500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.045802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.046075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.046104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.059274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.059528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.059554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.072709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.072946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.072996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.086161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.086411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.086445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.099644] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.099883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.099921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.113099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.113386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.113414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.126565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.126801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.126829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.140094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.140396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.140424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.153792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.154061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.154090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.167299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.167539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.167566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.183 [2024-07-15 16:16:53.180752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.183 [2024-07-15 16:16:53.181066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.183 [2024-07-15 16:16:53.181095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.194155] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.194497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.194540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.207767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.208055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.208088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.221211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.221472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.221499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.234666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.234905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.234933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.248159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.248400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.248428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.261528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.261771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.261798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.275109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.275368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.275396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 
[2024-07-15 16:16:53.288634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.288859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.288899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.302232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.302486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.302513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.315699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.315935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.315984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.329156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.329416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.329443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.342710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.342950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.343009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.356330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.356576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.356603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.369756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.370040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.370068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:07.442 [2024-07-15 16:16:53.383163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.383420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.383447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.396526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.396850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.396877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.410103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.410442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.410470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.442 [2024-07-15 16:16:53.423504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.442 [2024-07-15 16:16:53.423743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.442 [2024-07-15 16:16:53.423771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.443 [2024-07-15 16:16:53.436965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.443 [2024-07-15 16:16:53.437275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.443 [2024-07-15 16:16:53.437317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.450350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.450590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.450625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.463722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.464051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.464079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.477241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.477502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.477530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.490728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.491105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.504207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.504459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.504487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.517636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.517907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.531116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.531371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.531399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.544509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.544746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.544772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.558027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.558275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.558318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.571506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.571743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.571776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.584985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.585231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.585260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.598542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.598779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.598808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.611995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.612240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.612284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.625462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.625702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.625730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.638855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.639125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.639154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.652220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.652472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.652500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.665633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.665871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.665900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.679087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.679332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.692600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.692841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.692869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.704 [2024-07-15 16:16:53.706041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.704 [2024-07-15 16:16:53.706367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.704 [2024-07-15 16:16:53.706411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.991 [2024-07-15 16:16:53.718640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.991 [2024-07-15 16:16:53.718858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.991 [2024-07-15 16:16:53.718887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.991 [2024-07-15 16:16:53.731397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16e16b0) with pdu=0x2000190e2c28 00:24:07.991 [2024-07-15 16:16:53.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.991 [2024-07-15 16:16:53.731668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.991 00:24:07.991 Latency(us) 00:24:07.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.991 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:07.991 nvme0n1 : 2.01 20320.53 79.38 0.00 0.00 6284.42 2767.08 15534.46 00:24:07.991 =================================================================================================================== 
00:24:07.991 Total : 20320.53 79.38 0.00 0.00 6284.42 2767.08 15534.46 00:24:07.991 0 00:24:07.991 16:16:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:07.991 16:16:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:07.991 16:16:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:07.991 | .driver_specific 00:24:07.991 | .nvme_error 00:24:07.991 | .status_code 00:24:07.991 | .command_transient_transport_error' 00:24:07.991 16:16:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 881303 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 881303 ']' 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 881303 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881303 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881303' 00:24:08.251 killing process with pid 881303 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 881303 00:24:08.251 Received shutdown signal, test time was about 2.000000 seconds 00:24:08.251 00:24:08.251 Latency(us) 00:24:08.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.251 =================================================================================================================== 00:24:08.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.251 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 881303 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=881824 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 881824 
/var/tmp/bperf.sock 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 881824 ']' 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.511 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.511 [2024-07-15 16:16:54.342518] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:24:08.511 [2024-07-15 16:16:54.342604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881824 ] 00:24:08.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:08.511 Zero copy mechanism will not be used. 00:24:08.511 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.511 [2024-07-15 16:16:54.400698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.511 [2024-07-15 16:16:54.506286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.769 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.769 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:08.769 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:08.769 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.027 16:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.285 nvme0n1 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:09.285 16:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:09.285 Zero copy mechanism will not be used. 00:24:09.285 Running I/O for 2 seconds... 00:24:09.545 [2024-07-15 16:16:55.291978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.292297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.292335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.297553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.297873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.297904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.303625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.304005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.309693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.310027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.310058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.315647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.315963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.315994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.321382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.321682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.321720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.326666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.327004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.327034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.331775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.332120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.332151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.336909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.337243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.337284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.342072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.342353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.347106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.347399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.347428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.545 [2024-07-15 16:16:55.352809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.545 [2024-07-15 16:16:55.353176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.545 [2024-07-15 16:16:55.353205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.358738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 
[2024-07-15 16:16:55.359156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.359186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.364677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.365009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.365040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.370380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.370461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.370488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.376217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.376297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.376324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.382391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.382683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.382711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.388264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.388545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.388574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.394053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.394381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.394409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.399789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.400103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.400133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.405503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.405860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.405887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.411558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.412050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.412079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.417251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.417544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.417573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.422297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.422588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.422617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.427340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.427647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.427689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.432471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.432746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.432775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.437526] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.437814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.437843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.442566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.442857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.442886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.447505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.447799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.447828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.453157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.453472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.453500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.458783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.459130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.459160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.463827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.464189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.469136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.469468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.469497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:09.546 [2024-07-15 16:16:55.474201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.474511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.474539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.479334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.479611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.479639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.484676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.546 [2024-07-15 16:16:55.484985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.546 [2024-07-15 16:16:55.485015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.546 [2024-07-15 16:16:55.490968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.491301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.491340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.497116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.497409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.497438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.504042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.504382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.504410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.510594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.510917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.510966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.516080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.516475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.516512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.521272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.521605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.521633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.526982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.527364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.527402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.533268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.533591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.533619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.539040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.539423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.539450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.547 [2024-07-15 16:16:55.544683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.547 [2024-07-15 16:16:55.545027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.547 [2024-07-15 16:16:55.545071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.551078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.551387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.551415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.557279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.557632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.562696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.563042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.563076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.567752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.568075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.568104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.573326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.573768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.573807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.579798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.580086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.580116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.587240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.587586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.587615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.594532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.594841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.594869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.600711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.601029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.808 [2024-07-15 16:16:55.606481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.808 [2024-07-15 16:16:55.606781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.808 [2024-07-15 16:16:55.606809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.611658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.611970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.612000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.616572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.616995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.617025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.621697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.622041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.622071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.626675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.626991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.627020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.631984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.632298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 
[2024-07-15 16:16:55.632351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.638269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.638653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.638681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.643658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.644008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.644039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.648987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.649285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.649314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.654621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.655028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.655080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.660054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.660454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.660483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.665163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.665455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.665484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.670049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.670350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.670380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.675279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.675569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.675599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.680412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.680704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.680733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.685847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.686166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.686196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.691378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.691668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.691698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.696348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.696630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.696659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.701450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.701740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.701769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.706550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.706932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.706988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.711736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.712084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.712115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.716873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.717183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.717214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.722072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.722373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.722416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.728029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.728406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.728435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.733189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.733481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.733511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.738116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.738416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.738446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.743284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.743578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.743607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.748401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.748728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.748757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.754088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.754516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.809 [2024-07-15 16:16:55.759798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.809 [2024-07-15 16:16:55.760132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.809 [2024-07-15 16:16:55.760162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.764791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.765124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.765154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.769845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.770142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.770172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.775180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.775295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.775323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.782063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 
[2024-07-15 16:16:55.782390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.782419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.788466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.788774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.794703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.795033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.795063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.800166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.800451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.800481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.810 [2024-07-15 16:16:55.806648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:09.810 [2024-07-15 16:16:55.806944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.810 [2024-07-15 16:16:55.806998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.813021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.813322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.813352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.819435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.819842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.819871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.826447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.826709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.826738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.831811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.832121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.832152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.837052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.837324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.837368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.071 [2024-07-15 16:16:55.842473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.071 [2024-07-15 16:16:55.842724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.071 [2024-07-15 16:16:55.842753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.847393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.847659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.847688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.852224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.852501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.852537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.857742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.858084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.858114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.863678] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.863979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.869790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.870113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.870143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.876490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.876790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.876818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.881794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.882077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.882107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.886715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.886984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.887015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.892022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.892291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.892320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.896818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.897091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.897121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:10.072 [2024-07-15 16:16:55.901667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.901918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.901970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.907052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.907352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.907381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.913181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.913470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.913499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.919135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.919500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.919530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.925331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.925602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.925635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.931374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.931644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.931687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.937408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.937701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.937731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.943468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.943734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.943763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.949922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.950222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.950251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.956456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.956710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.956753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.963291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.963621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.963665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.970157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.970458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.970492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.977281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.977547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.977576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.983495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.983759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.983787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.988756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.989028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.989059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.994245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:55.994494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:55.994538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:55.999846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:56.000120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:56.000150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:56.005053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:56.005345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:56.005373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.072 [2024-07-15 16:16:56.010522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.072 [2024-07-15 16:16:56.010782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.072 [2024-07-15 16:16:56.010811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.016023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.016293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.016337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.021379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.021646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.021689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.027015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.027283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.027328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.032651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.032921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.032953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.038329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.038580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.038610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.043511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.043778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.043806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.049524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.049768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.049796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.054645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.054891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.054919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.059327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.059593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 
[2024-07-15 16:16:56.059623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.064035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.064306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.064335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.068753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.069043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.069074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.073 [2024-07-15 16:16:56.073489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.073 [2024-07-15 16:16:56.073766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.073 [2024-07-15 16:16:56.073795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.078583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.078866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.078895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.084272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.084530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.084559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.089671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.089928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.089981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.094953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.095216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.095269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.100542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.100802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.100831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.106050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.106333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.106360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.111440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.334 [2024-07-15 16:16:56.111712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.334 [2024-07-15 16:16:56.111741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.334 [2024-07-15 16:16:56.117069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.117336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.117366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.122477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.122735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.122764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.127984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.128237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.128281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.133343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.133605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.133637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.139299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.139546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.139575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.145748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.146060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.152302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.152581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.158550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.158842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.158871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.164238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.164517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.164546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.169799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.170072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.170103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.174527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.174807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.174836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.179357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.179623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.184006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.184276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.184305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.188631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.188895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.188925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.193282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.193548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.193577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.198119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.198396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.198424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.203559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.203809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.203852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.208424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 
[2024-07-15 16:16:56.208704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.208733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.213119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.213400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.213428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.217941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.218221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.218252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.222618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.222879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.222922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.227359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.227647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.232085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.232352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.232390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.236809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.237116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.237146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.241666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.241926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.241978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.246303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.246566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.246595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.251061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.251336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.251365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.255860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.256139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.256169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.260795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.261083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.261113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.265727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.266018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.266048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.270497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.270739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.270769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.275182] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.275462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.275494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.280108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.280387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.280415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.284892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.285172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.285202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.289645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.290023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.290054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.294513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.294763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.294790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.300549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.300822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.300851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.306074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.306346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.306375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
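The repeated pairs of entries above all trace the same path: the TCP transport finishes a CRC32C data digest (DDGST) calculation for a PDU on qpair 0x1516af0, detects a mismatch, and the in-flight WRITE (qid:1 cid:15) is completed with COMMAND TRANSIENT TRANSPORT ERROR and dnr:0, i.e. a retryable transport-level failure rather than a media or command error; the lba/len values only identify which outstanding WRITE each failed digest belonged to. As a rough illustration only (this is not SPDK source; crc32c and payload are made-up names), the self-contained C sketch below computes a CRC32C digest the way an NVMe/TCP data digest is defined and shows how corrupting a single payload byte produces exactly this kind of mismatch:

/*
 * Minimal, self-contained CRC32C sketch (illustrative only, not SPDK code).
 * NVMe/TCP's optional data digest (DDGST) is a CRC32C over the PDU data; if
 * the digest recomputed by the receiver differs from the one carried in the
 * PDU, the command fails with a transient transport error, as in the log.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[4096] = { 0 };                    /* stand-in for PDU data */

    uint32_t sent = crc32c(payload, sizeof(payload)); /* digest on transmit    */
    payload[7] ^= 0x01;                               /* one byte corrupted    */
    uint32_t recv = crc32c(payload, sizeof(payload)); /* digest on receive     */

    printf("DDGST sent 0x%08x, recomputed 0x%08x -> %s\n",
           sent, recv, sent == recv ? "ok" : "data digest error");
    return 0;
}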
00:24:10.335 [2024-07-15 16:16:56.312190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.312554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.312584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.318647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.318907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.318951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.323652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.323898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.323928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.328408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.328658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.328686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.335 [2024-07-15 16:16:56.333222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.335 [2024-07-15 16:16:56.333500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.335 [2024-07-15 16:16:56.333528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.597 [2024-07-15 16:16:56.338762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.597 [2024-07-15 16:16:56.339039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.339070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.343626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.343883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.343911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.348351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.348614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.348657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.353117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.353373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.353418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.357808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.358088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.358118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.362609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.362865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.367503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.367762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.367789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.373067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.373363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.373391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.379315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.379595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.379624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.385494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.385781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.385809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.392050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.392390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.392419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.398802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.399078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.399108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.404768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.405038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.405068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.409696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.409968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.410008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.414476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.414733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.414761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.419127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.419394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.419423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.423893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.424152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.424182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.428709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.428981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.429024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.433378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.433699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.433727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.438177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.438442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.438471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.442730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.443081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.443110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.447713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.448009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.448043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.452562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.452811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 
[2024-07-15 16:16:56.452844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.457519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.457773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.457801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.462211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.462483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.462510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.467035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.467305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.467333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.471792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.472064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.598 [2024-07-15 16:16:56.472094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.598 [2024-07-15 16:16:56.476510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.598 [2024-07-15 16:16:56.476800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.476828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.481245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.481628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.481655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.486203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.486479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.486506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.490915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.491206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.491250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.495831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.496115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.496145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.500589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.500890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.500920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.505500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.505762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.505791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.510257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.510526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.510558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.515029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.515318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.515346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.519798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.520155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.520185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.524880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.525163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.525193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.529751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.530019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.530053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.534573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.534839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.534868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.540138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.540408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.540438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.545146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.545429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.545457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.550075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.550353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.550383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.555002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.555270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.555314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.560110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.560413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.560441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.564985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.565255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.565299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.569779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.570066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.570099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.574443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.574707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.574737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.579144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.579424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.579458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.583783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.584053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.584083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.588432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 
[2024-07-15 16:16:56.588705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.588734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.593423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.593732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.593773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.599 [2024-07-15 16:16:56.598334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.599 [2024-07-15 16:16:56.598601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-07-15 16:16:56.598631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.860 [2024-07-15 16:16:56.603008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.603279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.603308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.607688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.607977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.608008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.612458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.612704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.612734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.617206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.617468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.617497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.622680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.622964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.622995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.628035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.628344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.628372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.632738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.633010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.633040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.637603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.637882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.637912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.642354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.642629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.642658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.647210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.647485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.647514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.652092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.652375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.652403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.656924] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.657199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.657228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.661581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.661846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.661875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.666225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.666487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.666517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.670834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.671109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.671137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.675730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.676030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.676060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.680409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.680661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.680691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.684911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.685183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.685213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:10.861 [2024-07-15 16:16:56.689862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.690125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.690159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.694739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.695031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.695060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.699402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.699671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.699700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.704299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.704566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.704601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.708896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.709153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.709183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.713499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.713767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.713803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.718194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.718462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.718496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.722837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.723096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.723126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.727512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.727793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.727823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.732213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.732475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.732504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.861 [2024-07-15 16:16:56.736894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.861 [2024-07-15 16:16:56.737212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.861 [2024-07-15 16:16:56.737242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.741699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.741961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.742000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.746256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.746551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.750975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.751245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.751273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.755685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.755963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.756006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.760406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.760660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.760690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.765384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.765638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.765667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.771245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.771666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.777871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.778238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.778270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.784870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.785133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.785177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.790771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.791039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.791076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.796185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.796453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.796484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.802083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.802348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.802378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.807570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.807824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.807853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.812672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.812971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.813009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.817366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.817627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.817658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.821979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.822243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.822292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.826709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.827004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 
[2024-07-15 16:16:56.827033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.831477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.831768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.831798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.836153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.836445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.836475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.840721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.841011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.841042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.845393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.845644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.845674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.850076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.850332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.854670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.854926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.854964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.862 [2024-07-15 16:16:56.859356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:10.862 [2024-07-15 16:16:56.859607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.862 [2024-07-15 16:16:56.859638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.863990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.864244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.864274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.868853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.869114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.869144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.873350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.873602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.873631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.877970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.878256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.878287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.882676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.882939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.882995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.888052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.888349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.893105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.893375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.893405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.897743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.898030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.898062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.902492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.902773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.902808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.907095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.907364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.907393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.911702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.911991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.912021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.916420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.916688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.916725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.921030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.921301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.921331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.925650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.925904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.925936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.930278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.930566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.930596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.934830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.935089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.935120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.939447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.939731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.939761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.944125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.944394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.944423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.948725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.948990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.949020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.953237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.953504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.953534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.957988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 
[2024-07-15 16:16:56.958255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.958285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.962565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.962822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.962853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.967140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.967392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.967423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.971753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.972014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.972044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.976357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.976629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.976658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.980985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.981238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.981283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.985596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.985864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.985894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.990250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.990504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.990548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.994810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.995078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.124 [2024-07-15 16:16:56.995109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.124 [2024-07-15 16:16:56.999433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.124 [2024-07-15 16:16:56.999727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:56.999757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.004025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.004294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.004324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.008623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.008893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.008922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.013300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.013554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.013587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.018328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.018584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.018615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.024231] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.024486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.024517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.030311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.030651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.037094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.037383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.037413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.043567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.043839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.043876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.049715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.049987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.050018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.055691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.055984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.056015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.061863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.062152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.062182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:11.125 [2024-07-15 16:16:57.068704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.069022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.069057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.075052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.075332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.075362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.081210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.081536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.081567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.087313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.087569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.087600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.094029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.094357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.094387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.100685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.101038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.101069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.107166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.107526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.107555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.114161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.114434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.114464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.125 [2024-07-15 16:16:57.120613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.125 [2024-07-15 16:16:57.120992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.125 [2024-07-15 16:16:57.121023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.126930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.127217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.127249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.132986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.133260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.133291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.139202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.139486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.139532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.145212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.145507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.145537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.151265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.151544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.151575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.157850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.158200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.158231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.164835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.165131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.165163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.171646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.171952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.172005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.178506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.178782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.185644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.185949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.186002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.192463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.192762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.192791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.199423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.199733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.199763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.206452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.206753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.206782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.213847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.214125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.214163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.220725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.221086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.221116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.227709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.228094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.234644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.235020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.235054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.241708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.241973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 [2024-07-15 16:16:57.242004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.389 [2024-07-15 16:16:57.247989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90 00:24:11.389 [2024-07-15 16:16:57.248282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.389 
[2024-07-15 16:16:57.248312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:11.389 [2024-07-15 16:16:57.254830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90
00:24:11.389 [2024-07-15 16:16:57.255190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.389 [2024-07-15 16:16:57.255220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:11.389 [2024-07-15 16:16:57.261815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90
00:24:11.389 [2024-07-15 16:16:57.262142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.389 [2024-07-15 16:16:57.262173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:11.389 [2024-07-15 16:16:57.268952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90
00:24:11.389 [2024-07-15 16:16:57.269229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.389 [2024-07-15 16:16:57.269260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:11.389 [2024-07-15 16:16:57.275708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90
00:24:11.389 [2024-07-15 16:16:57.275971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.389 [2024-07-15 16:16:57.276001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:11.389 [2024-07-15 16:16:57.282803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1516af0) with pdu=0x2000190fef90
00:24:11.389 [2024-07-15 16:16:57.283099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.389 [2024-07-15 16:16:57.283130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:11.389
00:24:11.389 Latency(us)
00:24:11.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.389 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:11.389 nvme0n1 : 2.00 5739.49 717.44 0.00 0.00 2779.47 2135.99 8786.68
00:24:11.390 ===================================================================================================================
00:24:11.390 Total : 5739.49 717.44 0.00 0.00 2779.47 2135.99 8786.68
00:24:11.390 0
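The harness now reads the transient-error counter back through the bdev layer, which is what the get_transient_errcount trace that follows does. A minimal standalone sketch of the same query, assuming the bperf RPC socket (/var/tmp/bperf.sock), the bdev name (nvme0n1) and the rpc.py path used in this run, might look like:

  # Read the transient transport error count for the bperf-attached bdev.
  # The jq path mirrors the filter traced below in host/digest.sh.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error stage only passes when at least one injected error was
  # observed; in this run the harness counted 370.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The trace below performs the same steps through bperf_rpc and jq before asserting the count is non-zero.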
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:11.390 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:11.390 | .driver_specific 00:24:11.390 | .nvme_error 00:24:11.390 | .status_code 00:24:11.390 | .command_transient_transport_error' 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 881824 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 881824 ']' 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 881824 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881824 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881824' 00:24:11.648 killing process with pid 881824 00:24:11.648 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 881824 00:24:11.648 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.648 00:24:11.648 Latency(us) 00:24:11.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.648 =================================================================================================================== 00:24:11.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.649 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 881824 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 880364 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 880364 ']' 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 880364 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880364 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880364' 00:24:11.907 killing process with pid 880364 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 880364 00:24:11.907 16:16:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 880364 
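The xtrace above shows how the digest-error test decides pass/fail: get_transient_errcount asks the bdevperf RPC socket for nvme0n1 iostat and pulls the transient-transport-error counter out of the JSON with jq, then treats any value above zero (370 here) as proof that the injected data-digest corruption was detected and retried. A minimal standalone sketch of that check, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance already serving /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above (paths are assumptions).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=${BPERF_SOCK:-/var/tmp/bperf.sock}

# bdev_get_iostat exposes per-status-code NVMe error counters under
# driver_specific.nvme_error; the digest test reads just one of them.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

if (( errcount > 0 )); then
  echo "data digest errors were seen and retried ($errcount commands)"
else
  echo "no transient transport errors recorded" >&2
  exit 1
fi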
00:24:12.166 00:24:12.166 real 0m15.425s 00:24:12.166 user 0m30.448s 00:24:12.166 sys 0m4.222s 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.166 ************************************ 00:24:12.166 END TEST nvmf_digest_error 00:24:12.166 ************************************ 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.166 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.166 rmmod nvme_tcp 00:24:12.426 rmmod nvme_fabrics 00:24:12.426 rmmod nvme_keyring 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 880364 ']' 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 880364 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 880364 ']' 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 880364 00:24:12.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (880364) - No such process 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 880364 is not found' 00:24:12.426 Process with pid 880364 is not found 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.426 16:16:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.331 16:17:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.331 00:24:14.331 real 0m35.333s 00:24:14.331 user 1m2.121s 00:24:14.331 sys 0m9.942s 00:24:14.331 16:17:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.331 16:17:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.331 ************************************ 00:24:14.331 END TEST nvmf_digest 00:24:14.331 ************************************ 
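nvmftestfini, traced above, mirrors the setup: it unloads the NVMe/TCP initiator modules, kills the target process if it is still alive (pid 880364 was already gone after the earlier kill -9), and flushes the test address off the initiator interface. Condensed into a hedged sketch using only the commands visible in the trace; the netns removal helper (_remove_spdk_ns) is not expanded because its body is not shown in this log:

#!/usr/bin/env bash
# Rough teardown sketch based on the nvmftestfini trace above.
nvmfpid=880364                       # pid recorded when the target started

sync
modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring in the trace
modprobe -v -r nvme-fabrics

# The target may already be gone (it was killed with -9 during the error test),
# so tolerate "no such process".
kill "$nvmfpid" 2>/dev/null || echo "Process with pid $nvmfpid is not found"

ip -4 addr flush cvl_0_1             # strip the 10.0.0.1/24 test address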
00:24:14.331 16:17:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.331 16:17:00 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:14.331 16:17:00 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:14.331 16:17:00 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:14.331 16:17:00 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:14.331 16:17:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.331 16:17:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.331 16:17:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.331 ************************************ 00:24:14.331 START TEST nvmf_bdevperf 00:24:14.331 ************************************ 00:24:14.331 16:17:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:14.589 * Looking for test storage... 00:24:14.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.589 16:17:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.589 16:17:00 
nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 
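nvmftestinit, whose trace starts here and runs through the ping checks further below, probes the two e810 ports (cvl_0_0 / cvl_0_1), moves one of them into a private network namespace, and gives the pair back-to-back addresses so the TCP target and the initiator can talk over real hardware on a single host. The namespace plumbing, condensed from the ip/iptables commands traced below into a hedged sketch:

#!/usr/bin/env bash
# Condensed from the nvmftestinit trace below; interface names and addresses
# are the ones this rig happens to use (cvl_0_0 = target side, cvl_0_1 = initiator).
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target NIC lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in

ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # target -> initiator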
00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.590 16:17:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.495 16:17:02 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:16.495 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:16.495 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:16.495 Found net devices under 0000:09:00.0: cvl_0_0 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.495 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:16.496 Found net devices under 0000:09:00.1: cvl_0_1 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.496 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.769 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:24:16.769 00:24:16.769 --- 10.0.0.2 ping statistics --- 00:24:16.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.769 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:16.769 00:24:16.769 --- 10.0.0.1 ping statistics --- 00:24:16.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.769 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=884186 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 884186 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 884186 ']' 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.769 16:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:16.769 [2024-07-15 16:17:02.691999] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:24:16.769 [2024-07-15 16:17:02.692089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.769 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.769 [2024-07-15 16:17:02.756355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:17.029 [2024-07-15 16:17:02.865122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.029 [2024-07-15 16:17:02.865175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.029 [2024-07-15 16:17:02.865198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.029 [2024-07-15 16:17:02.865210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.029 [2024-07-15 16:17:02.865220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.029 [2024-07-15 16:17:02.865311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.029 [2024-07-15 16:17:02.868990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.029 [2024-07-15 16:17:02.869066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 [2024-07-15 16:17:03.696052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 Malloc0 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.965 [2024-07-15 16:17:03.761589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.965 { 00:24:17.965 "params": { 00:24:17.965 "name": "Nvme$subsystem", 00:24:17.965 "trtype": "$TEST_TRANSPORT", 00:24:17.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.965 "adrfam": "ipv4", 00:24:17.965 "trsvcid": "$NVMF_PORT", 00:24:17.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.965 "hdgst": ${hdgst:-false}, 00:24:17.965 "ddgst": ${ddgst:-false} 00:24:17.965 }, 00:24:17.965 "method": "bdev_nvme_attach_controller" 00:24:17.965 } 00:24:17.965 EOF 00:24:17.965 )") 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:17.965 16:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:17.965 "params": { 00:24:17.965 "name": "Nvme1", 00:24:17.965 "trtype": "tcp", 00:24:17.965 "traddr": "10.0.0.2", 00:24:17.965 "adrfam": "ipv4", 00:24:17.965 "trsvcid": "4420", 00:24:17.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.965 "hdgst": false, 00:24:17.965 "ddgst": false 00:24:17.965 }, 00:24:17.965 "method": "bdev_nvme_attach_controller" 00:24:17.965 }' 00:24:17.965 [2024-07-15 16:17:03.809229] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:24:17.965 [2024-07-15 16:17:03.809313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884337 ] 00:24:17.965 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.965 [2024-07-15 16:17:03.868756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.225 [2024-07-15 16:17:03.982945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.483 Running I/O for 1 seconds... 00:24:19.419 00:24:19.419 Latency(us) 00:24:19.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.419 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:19.419 Verification LBA range: start 0x0 length 0x4000 00:24:19.419 Nvme1n1 : 1.00 8866.60 34.64 0.00 0.00 14362.30 1517.04 15146.10 00:24:19.419 =================================================================================================================== 00:24:19.419 Total : 8866.60 34.64 0.00 0.00 14362.30 1517.04 15146.10 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=884497 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:19.678 { 00:24:19.678 "params": { 00:24:19.678 "name": "Nvme$subsystem", 00:24:19.678 "trtype": "$TEST_TRANSPORT", 00:24:19.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:19.678 "adrfam": "ipv4", 00:24:19.678 "trsvcid": "$NVMF_PORT", 00:24:19.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:19.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:19.678 "hdgst": ${hdgst:-false}, 00:24:19.678 "ddgst": ${ddgst:-false} 00:24:19.678 }, 00:24:19.678 "method": "bdev_nvme_attach_controller" 00:24:19.678 } 00:24:19.678 EOF 00:24:19.678 )") 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:19.678 16:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:19.678 "params": { 00:24:19.678 "name": "Nvme1", 00:24:19.678 "trtype": "tcp", 00:24:19.678 "traddr": "10.0.0.2", 00:24:19.678 "adrfam": "ipv4", 00:24:19.678 "trsvcid": "4420", 00:24:19.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.678 "hdgst": false, 00:24:19.678 "ddgst": false 00:24:19.678 }, 00:24:19.678 "method": "bdev_nvme_attach_controller" 00:24:19.678 }' 00:24:19.678 [2024-07-15 16:17:05.610616] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:24:19.678 [2024-07-15 16:17:05.610708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884497 ] 00:24:19.678 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.679 [2024-07-15 16:17:05.674213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.938 [2024-07-15 16:17:05.786355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.196 Running I/O for 15 seconds... 00:24:22.726 16:17:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 884186 00:24:22.726 16:17:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:22.726 [2024-07-15 16:17:08.573885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.573934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.573995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.726 [2024-07-15 16:17:08.574314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.726 [2024-07-15 16:17:08.574342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 
[2024-07-15 16:17:08.574548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.727 [2024-07-15 16:17:08.574712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.727 [2024-07-15 16:17:08.574739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.574971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.574989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48344 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.727 [2024-07-15 16:17:08.575519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.727 [2024-07-15 16:17:08.575533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 
16:17:08.575731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.575809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.575984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.575998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.728 [2024-07-15 16:17:08.576541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.728 [2024-07-15 16:17:08.576702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.728 [2024-07-15 16:17:08.576715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 
[2024-07-15 16:17:08.576893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.576982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.576997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.729 [2024-07-15 16:17:08.577706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bc4c0 is same with the state(5) to be set 00:24:22.729 [2024-07-15 16:17:08.577733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.729 [2024-07-15 16:17:08.577746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.729 [2024-07-15 16:17:08.577756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47968 len:8 PRP1 0x0 PRP2 0x0 00:24:22.729 [2024-07-15 16:17:08.577768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577821] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x11bc4c0 was disconnected and freed. reset controller. 00:24:22.729 [2024-07-15 16:17:08.577897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.729 [2024-07-15 16:17:08.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.729 [2024-07-15 16:17:08.577980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.577998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.729 [2024-07-15 16:17:08.578011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.729 [2024-07-15 16:17:08.578025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.730 [2024-07-15 16:17:08.578039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.730 [2024-07-15 16:17:08.578052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.581061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.581099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.581738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.581777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.581793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.582044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.582280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.582315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.582329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.585486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.730 [2024-07-15 16:17:08.594613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.595009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.595039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.595055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.595292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.595498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.595521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.595535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.598512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.730 [2024-07-15 16:17:08.607786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.608137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.608165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.608180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.608413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.608616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.608635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.608647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.611429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.730 [2024-07-15 16:17:08.620843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.621193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.621221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.621236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.621450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.621652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.621671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.621683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.624539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.730 [2024-07-15 16:17:08.634132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.634526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.634565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.634580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.634796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.635023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.635044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.635056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.638144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.730 [2024-07-15 16:17:08.647433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.647892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.647941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.647966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.648195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.648445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.648464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.648476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.651531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.730 [2024-07-15 16:17:08.660711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.661126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.661154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.661173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.661409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.661611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.661630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.661643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.664549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.730 [2024-07-15 16:17:08.673703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.674120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.674148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.674171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.674403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.674606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.674625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.674638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.677543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.730 [2024-07-15 16:17:08.686709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.687055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.687083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.687099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.687339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.687541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.687560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.687573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.730 [2024-07-15 16:17:08.690486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.730 [2024-07-15 16:17:08.699674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.730 [2024-07-15 16:17:08.700092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-07-15 16:17:08.700120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.730 [2024-07-15 16:17:08.700136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.730 [2024-07-15 16:17:08.700370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.730 [2024-07-15 16:17:08.700591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.730 [2024-07-15 16:17:08.700610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.730 [2024-07-15 16:17:08.700623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.731 [2024-07-15 16:17:08.703646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.731 [2024-07-15 16:17:08.712812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.731 [2024-07-15 16:17:08.713234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-07-15 16:17:08.713262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.731 [2024-07-15 16:17:08.713278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.731 [2024-07-15 16:17:08.713510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.731 [2024-07-15 16:17:08.713713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.731 [2024-07-15 16:17:08.713733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.731 [2024-07-15 16:17:08.713745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.731 [2024-07-15 16:17:08.716627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.731 [2024-07-15 16:17:08.726206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.731 [2024-07-15 16:17:08.726642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-07-15 16:17:08.726669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.731 [2024-07-15 16:17:08.726685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.731 [2024-07-15 16:17:08.726942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.727194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.727217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.727246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.730343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.990 [2024-07-15 16:17:08.739559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.739902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.739929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.990 [2024-07-15 16:17:08.739945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.990 [2024-07-15 16:17:08.740203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.740409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.740428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.740440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.743298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.990 [2024-07-15 16:17:08.752641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.753044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.753072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.990 [2024-07-15 16:17:08.753087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.990 [2024-07-15 16:17:08.753324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.753526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.753545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.753557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.756461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.990 [2024-07-15 16:17:08.765669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.766011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.766038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.990 [2024-07-15 16:17:08.766054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.990 [2024-07-15 16:17:08.766267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.766469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.766488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.766501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.769365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.990 [2024-07-15 16:17:08.778680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.779028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.779055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.990 [2024-07-15 16:17:08.779071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.990 [2024-07-15 16:17:08.779306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.779508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.779528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.779540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.782426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.990 [2024-07-15 16:17:08.791743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.792158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.792186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.990 [2024-07-15 16:17:08.792202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.990 [2024-07-15 16:17:08.792434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.990 [2024-07-15 16:17:08.792637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.990 [2024-07-15 16:17:08.792656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.990 [2024-07-15 16:17:08.792668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.990 [2024-07-15 16:17:08.795453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.990 [2024-07-15 16:17:08.804695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.990 [2024-07-15 16:17:08.805050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.990 [2024-07-15 16:17:08.805076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.805091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.805286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.805506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.805525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.805538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.808422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.817776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.818089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.818116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.818132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.818347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.818555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.818574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.818586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.821474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.830836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.831229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.831257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.831273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.831503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.831705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.831724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.831736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.834984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.844195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.844588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.844617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.844633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.844884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.845121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.845142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.845155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.848026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.857338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.857755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.857783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.857800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.858046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.858276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.858296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.858309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.861167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.870410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.870825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.870852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.870870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.871124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.871336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.871355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.871367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.874206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.883375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.883782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.883810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.883826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.884071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.884285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.884304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.884332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.887172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.896470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.896811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.896838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.896854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.897118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.897329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.897348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.897361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.900197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.909526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.909927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.909953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.909991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.910221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.910423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.910443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.910455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.913196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.922642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.923071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.923098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.923113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.923314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.923534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.923553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.923565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.926463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.935756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.936215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.936243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.936263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.936502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.936694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.936713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.936726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.939793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.949331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.949755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.949782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.949798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.950067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.950299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.950338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.950352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.953629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.962899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.963228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.963256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.963273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.963502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.963748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.963769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.963782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.967016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.991 [2024-07-15 16:17:08.976484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.976842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.976871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.976888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.991 [2024-07-15 16:17:08.977111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.991 [2024-07-15 16:17:08.977352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.991 [2024-07-15 16:17:08.977374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.991 [2024-07-15 16:17:08.977388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.991 [2024-07-15 16:17:08.980654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:22.991 [2024-07-15 16:17:08.990060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.991 [2024-07-15 16:17:08.990408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.991 [2024-07-15 16:17:08.990437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:22.991 [2024-07-15 16:17:08.990454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:22.992 [2024-07-15 16:17:08.990669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:22.992 [2024-07-15 16:17:08.990892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.992 [2024-07-15 16:17:08.990914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.992 [2024-07-15 16:17:08.990927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.253 [2024-07-15 16:17:08.994222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.253 [2024-07-15 16:17:09.003645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.253 [2024-07-15 16:17:09.003979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.253 [2024-07-15 16:17:09.004008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.253 [2024-07-15 16:17:09.004025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.253 [2024-07-15 16:17:09.004239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.253 [2024-07-15 16:17:09.004488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.253 [2024-07-15 16:17:09.004509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.253 [2024-07-15 16:17:09.004522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.253 [2024-07-15 16:17:09.007725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.253 [2024-07-15 16:17:09.017155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.253 [2024-07-15 16:17:09.017606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.253 [2024-07-15 16:17:09.017636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.253 [2024-07-15 16:17:09.017653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.253 [2024-07-15 16:17:09.017884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.253 [2024-07-15 16:17:09.018140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.253 [2024-07-15 16:17:09.018164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.253 [2024-07-15 16:17:09.018179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.253 [2024-07-15 16:17:09.021457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.253 [2024-07-15 16:17:09.030549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.253 [2024-07-15 16:17:09.030922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.253 [2024-07-15 16:17:09.031026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.253 [2024-07-15 16:17:09.031044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.253 [2024-07-15 16:17:09.031258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.253 [2024-07-15 16:17:09.031462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.253 [2024-07-15 16:17:09.031482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.253 [2024-07-15 16:17:09.031496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.253 [2024-07-15 16:17:09.034473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.253 [2024-07-15 16:17:09.043777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.253 [2024-07-15 16:17:09.044114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.044144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.044161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.044401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.044604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.044625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.044638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.047643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.254 [2024-07-15 16:17:09.057098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.057529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.057557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.057573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.057810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.058049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.058072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.058087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.060984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.254 [2024-07-15 16:17:09.070073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.070545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.070573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.070588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.070833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.071091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.071115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.071129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.074111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.254 [2024-07-15 16:17:09.083114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.083458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.083486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.083502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.083742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.083968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.083989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.084021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.087042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.254 [2024-07-15 16:17:09.096465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.096815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.096846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.096873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.097147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.097370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.097391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.097404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.100278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.254 [2024-07-15 16:17:09.109514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.109856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.109885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.109900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.110161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.110369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.110390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.110403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.113251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.254 [2024-07-15 16:17:09.122640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.123046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.123074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.123090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.123324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.123526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.123547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.123560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.126449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.254 [2024-07-15 16:17:09.135714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.136029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.136057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.136073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.136290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.136493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.136514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.136527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.139413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.254 [2024-07-15 16:17:09.148838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.254 [2024-07-15 16:17:09.149157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.254 [2024-07-15 16:17:09.149185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.254 [2024-07-15 16:17:09.149200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.254 [2024-07-15 16:17:09.149415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.254 [2024-07-15 16:17:09.149620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.254 [2024-07-15 16:17:09.149640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.254 [2024-07-15 16:17:09.149653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.254 [2024-07-15 16:17:09.152602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.255 [2024-07-15 16:17:09.161982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.162354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.162382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.162398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.162613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.162816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.162837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.162850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.165714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.255 [2024-07-15 16:17:09.175142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.175566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.175594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.175610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.175848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.176085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.176107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.176121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.178980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.255 [2024-07-15 16:17:09.188168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.188514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.188542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.188558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.188792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.189041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.189063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.189077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.191862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.255 [2024-07-15 16:17:09.201358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.201764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.201790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.201805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.202052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.202280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.202315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.202328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.205171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.255 [2024-07-15 16:17:09.214426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.214832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.214860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.214875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.215122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.215332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.215354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.215368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.218215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.255 [2024-07-15 16:17:09.227477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.227822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.227850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.227866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.228133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.228345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.228365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.228378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.231225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.255 [2024-07-15 16:17:09.240555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.240927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.240964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.240997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.241219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.241438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.241458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.241471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.255 [2024-07-15 16:17:09.244331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.255 [2024-07-15 16:17:09.253796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.255 [2024-07-15 16:17:09.254182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.255 [2024-07-15 16:17:09.254211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.255 [2024-07-15 16:17:09.254228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.255 [2024-07-15 16:17:09.254469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.255 [2024-07-15 16:17:09.254669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.255 [2024-07-15 16:17:09.254690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.255 [2024-07-15 16:17:09.254704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.523 [2024-07-15 16:17:09.257829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.523 [2024-07-15 16:17:09.267058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.523 [2024-07-15 16:17:09.267445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.523 [2024-07-15 16:17:09.267479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.267497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.267739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.267938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.267986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.268003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.271044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.280394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.280741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.280770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.280785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.281017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.281232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.281254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.281282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.284175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.293516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.293923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.293951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.293994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.294233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.294458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.294478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.294490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.297350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.306700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.307040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.307067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.307083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.307317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.307525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.307545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.307559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.310458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.319877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.320309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.320337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.320352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.320587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.320789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.320809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.320822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.323724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.333077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.333445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.333473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.333488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.333722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.333924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.333944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.333980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.337105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.346125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.346516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.346583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.346598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.346824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.347071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.347093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.347106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.350022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.359264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.359612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.359639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.359655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.359890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.360125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.360147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.360160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.363018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.372334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.372679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.372706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.372722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.372963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.373176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.373197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.373210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.376092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.385441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.385738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.385765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.385781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.386001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.386194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.386223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.386235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.389133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.398597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.399006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.399035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.399056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.399297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.399485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.399504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.399517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.402373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.411843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.412276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.412319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.412335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.412569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.412773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.412792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.412805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.415707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.425047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.425376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.425405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.425420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.425636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.425840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.425860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.425873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.428768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.524 [2024-07-15 16:17:09.438129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.438473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.524 [2024-07-15 16:17:09.438500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.524 [2024-07-15 16:17:09.438514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.524 [2024-07-15 16:17:09.438723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.524 [2024-07-15 16:17:09.438925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.524 [2024-07-15 16:17:09.438968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.524 [2024-07-15 16:17:09.439012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.524 [2024-07-15 16:17:09.441876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.524 [2024-07-15 16:17:09.451170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.524 [2024-07-15 16:17:09.451512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.451540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.451556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.451789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.452032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.452054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.452067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.525 [2024-07-15 16:17:09.454922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.525 [2024-07-15 16:17:09.464228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.525 [2024-07-15 16:17:09.464677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.464704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.464719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.464947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.465170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.465191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.465203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.525 [2024-07-15 16:17:09.468181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.525 [2024-07-15 16:17:09.477351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.525 [2024-07-15 16:17:09.477754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.477782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.477797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.478045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.478258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.478279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.478292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.525 [2024-07-15 16:17:09.481093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.525 [2024-07-15 16:17:09.490343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.525 [2024-07-15 16:17:09.490690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.490734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.490980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.491192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.491213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.491228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.525 [2024-07-15 16:17:09.494101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.525 [2024-07-15 16:17:09.503435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.525 [2024-07-15 16:17:09.503839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.503867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.503883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.504136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.504380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.504401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.504413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.525 [2024-07-15 16:17:09.507272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.525 [2024-07-15 16:17:09.516742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.525 [2024-07-15 16:17:09.517120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.525 [2024-07-15 16:17:09.517148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.525 [2024-07-15 16:17:09.517164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.525 [2024-07-15 16:17:09.517405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.525 [2024-07-15 16:17:09.517632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.525 [2024-07-15 16:17:09.517667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.525 [2024-07-15 16:17:09.517680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.794 [2024-07-15 16:17:09.520834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.529842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.530257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.530286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.530318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.530569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.530783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.530805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.530819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.533800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.795 [2024-07-15 16:17:09.542895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.543289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.543333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.543348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.543594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.543781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.543801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.543814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.546722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.555919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.556267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.556295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.556311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.556546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.556750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.556770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.556783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.559645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.795 [2024-07-15 16:17:09.569049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.569516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.569570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.569586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.569827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.570043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.570065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.570083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.572920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.582160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.582566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.582594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.582611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.582847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.583098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.583120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.583134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.586015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.795 [2024-07-15 16:17:09.595591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.595979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.596008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.596039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.596276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.596480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.596501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.596515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.599538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.608628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.608942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.609020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.609035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.609263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.609465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.609486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.609499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.612317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.795 [2024-07-15 16:17:09.621760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.622176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.622208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.622225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.622459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.622662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.622682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.622696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.625583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.634871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.635226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.635254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.635270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.635507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.635709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.635731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.635743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.638628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.795 [2024-07-15 16:17:09.648187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.648607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.648634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.795 [2024-07-15 16:17:09.648649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.795 [2024-07-15 16:17:09.648873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.795 [2024-07-15 16:17:09.649110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.795 [2024-07-15 16:17:09.649132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.795 [2024-07-15 16:17:09.649146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.795 [2024-07-15 16:17:09.652262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.795 [2024-07-15 16:17:09.661462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.795 [2024-07-15 16:17:09.661802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.795 [2024-07-15 16:17:09.661830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.661845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.662081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.662329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.662349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.662362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.665404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.796 [2024-07-15 16:17:09.674816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.675174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.675202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.675218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.675451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.675653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.675673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.675686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.678698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.796 [2024-07-15 16:17:09.688166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.688548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.688576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.688592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.688829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.689064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.689086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.689099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.692051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.796 [2024-07-15 16:17:09.701267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.701673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.701701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.701717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.701951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.702156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.702177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.702190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.704946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.796 [2024-07-15 16:17:09.714370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.714728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.714754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.714769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.714976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.715174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.715195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.715209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.718068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.796 [2024-07-15 16:17:09.727420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.727825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.727854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.727872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.728103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.728316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.728337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.728349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.731277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.796 [2024-07-15 16:17:09.740569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.740883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.740948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.740974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.741190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.741413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.741434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.741447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.744305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.796 [2024-07-15 16:17:09.753553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.753927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.754005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.754025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.754254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.754457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.754478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.754490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.757273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.796 [2024-07-15 16:17:09.766525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.767003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.767031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.767047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.767290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.767478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.767498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.767511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.770295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.796 [2024-07-15 16:17:09.779549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.779935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.780012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.780029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.780261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.780465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.796 [2024-07-15 16:17:09.780486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.796 [2024-07-15 16:17:09.780499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.796 [2024-07-15 16:17:09.783284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.796 [2024-07-15 16:17:09.792638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.796 [2024-07-15 16:17:09.793017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.796 [2024-07-15 16:17:09.793046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:23.796 [2024-07-15 16:17:09.793063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:23.796 [2024-07-15 16:17:09.793299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:23.796 [2024-07-15 16:17:09.793518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.797 [2024-07-15 16:17:09.793543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.797 [2024-07-15 16:17:09.793557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.797 [2024-07-15 16:17:09.796679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.055 [2024-07-15 16:17:09.805833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.055 [2024-07-15 16:17:09.806236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.055 [2024-07-15 16:17:09.806265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.055 [2024-07-15 16:17:09.806280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.055 [2024-07-15 16:17:09.806510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.055 [2024-07-15 16:17:09.806713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.055 [2024-07-15 16:17:09.806733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.055 [2024-07-15 16:17:09.806745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.055 [2024-07-15 16:17:09.809632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.055 [2024-07-15 16:17:09.818903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.055 [2024-07-15 16:17:09.819253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.055 [2024-07-15 16:17:09.819282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.055 [2024-07-15 16:17:09.819298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.055 [2024-07-15 16:17:09.819532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.055 [2024-07-15 16:17:09.819735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.055 [2024-07-15 16:17:09.819756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.055 [2024-07-15 16:17:09.819768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.055 [2024-07-15 16:17:09.822662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.055 [2024-07-15 16:17:09.831994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.055 [2024-07-15 16:17:09.832363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.055 [2024-07-15 16:17:09.832390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.055 [2024-07-15 16:17:09.832405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.055 [2024-07-15 16:17:09.832619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.055 [2024-07-15 16:17:09.832822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.055 [2024-07-15 16:17:09.832842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.055 [2024-07-15 16:17:09.832855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.055 [2024-07-15 16:17:09.835757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.055 [2024-07-15 16:17:09.845004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.055 [2024-07-15 16:17:09.845419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.055 [2024-07-15 16:17:09.845447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.055 [2024-07-15 16:17:09.845462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.055 [2024-07-15 16:17:09.845697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.055 [2024-07-15 16:17:09.845899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.055 [2024-07-15 16:17:09.845920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.055 [2024-07-15 16:17:09.845933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.055 [2024-07-15 16:17:09.849030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.055 [2024-07-15 16:17:09.858296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.055 [2024-07-15 16:17:09.858679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.858708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.858725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.858968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.859184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.859206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.859219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.862115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:09.871402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.871758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.871785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.871799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.872023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.872238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.872259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.872287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.875125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:09.884600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.885015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.885044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.885060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.885300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.885503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.885523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.885536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.888406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:09.897605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.897950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.897986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.898002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.898242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.898445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.898465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.898478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.901348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:09.910739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.911152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.911181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.911198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.911433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.911636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.911656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.911669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.914595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:09.923795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.924172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.924200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.924216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.924436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.924641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.924661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.924678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.927567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:09.936924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.937272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.937300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.937316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.937531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.937734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.937754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.937768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.940639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:09.950043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.950389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.950417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.950432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.950668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.950870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.950891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.950903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.953774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:09.963140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.963484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.963511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.963527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.963761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.963990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.964027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.964041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.966907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:09.976313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.976734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.976766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.976781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.977043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.977236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.977255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.977268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.980234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:09.989500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:09.989904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:09.989946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:09.989975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:09.990234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:09.990456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:09.990476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:09.990488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:09.993473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:10.002991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:10.003358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:10.003386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:10.003402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:10.003629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:10.003849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:10.003869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:10.003883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:10.007130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:10.016237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:10.016676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:10.016707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:10.016724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:10.016969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:10.017188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:10.017209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:10.017222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:10.020105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:10.029837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:10.030206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:10.030235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:10.030251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:10.030482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:10.030716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:10.030736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:10.030749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:10.033918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.056 [2024-07-15 16:17:10.043357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:10.043690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:10.043719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:10.043735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:10.043986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:10.044204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:10.044226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:10.044241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.056 [2024-07-15 16:17:10.047431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.056 [2024-07-15 16:17:10.057177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.056 [2024-07-15 16:17:10.057550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.056 [2024-07-15 16:17:10.057578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.056 [2024-07-15 16:17:10.057594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.056 [2024-07-15 16:17:10.057836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.056 [2024-07-15 16:17:10.058088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.056 [2024-07-15 16:17:10.058110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.056 [2024-07-15 16:17:10.058125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.314 [2024-07-15 16:17:10.061477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.314 [2024-07-15 16:17:10.070834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.314 [2024-07-15 16:17:10.071181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.314 [2024-07-15 16:17:10.071229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.314 [2024-07-15 16:17:10.071245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.314 [2024-07-15 16:17:10.071490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.314 [2024-07-15 16:17:10.071705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.314 [2024-07-15 16:17:10.071726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.314 [2024-07-15 16:17:10.071740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.314 [2024-07-15 16:17:10.074976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.314 [2024-07-15 16:17:10.084462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.314 [2024-07-15 16:17:10.084825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.314 [2024-07-15 16:17:10.084861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.314 [2024-07-15 16:17:10.084893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.314 [2024-07-15 16:17:10.085132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.314 [2024-07-15 16:17:10.085375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.314 [2024-07-15 16:17:10.085396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.314 [2024-07-15 16:17:10.085409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.314 [2024-07-15 16:17:10.088561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.097753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.098056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.098084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.098101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.098341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.098558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.098577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.098590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.102029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.315 [2024-07-15 16:17:10.111259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.111646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.111674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.111694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.111917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.112159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.112182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.112196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.115453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.124764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.125104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.125132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.125148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.125377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.125603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.125624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.125637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.128839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.315 [2024-07-15 16:17:10.138383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.138797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.138825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.138841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.139061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.139290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.139324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.139337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.142306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.151634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.152048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.152076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.152092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.152323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.152529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.152552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.152565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.155542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.315 [2024-07-15 16:17:10.164981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.165344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.165370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.165385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.165599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.165802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.165821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.165833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.168815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.177929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.178339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.178367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.178382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.178615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.178818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.178837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.178850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.181740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.315 [2024-07-15 16:17:10.191000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.191407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.191450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.191685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.191888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.191907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.191919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.194824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.204009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.204354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.204381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.204396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.204625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.204829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.204848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.204860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.207761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.315 [2024-07-15 16:17:10.217062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.217464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.217492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.217507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.217741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.217968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.217987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.218014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.220800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.315 [2024-07-15 16:17:10.230097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.315 [2024-07-15 16:17:10.230508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.315 [2024-07-15 16:17:10.230554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.315 [2024-07-15 16:17:10.230569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.315 [2024-07-15 16:17:10.230798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.315 [2024-07-15 16:17:10.231027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.315 [2024-07-15 16:17:10.231048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.315 [2024-07-15 16:17:10.231061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.315 [2024-07-15 16:17:10.233904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.316 [2024-07-15 16:17:10.243208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.243580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.243629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.243664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.243913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.244147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.244168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.244182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.247055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.316 [2024-07-15 16:17:10.256250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.256679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.256727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.256742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.256985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.257183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.257203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.257216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.260074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.316 [2024-07-15 16:17:10.269334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.269741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.269768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.269783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.270029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.270228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.270248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.270275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.273112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.316 [2024-07-15 16:17:10.282307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.282660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.282708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.282724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.282979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.283186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.283206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.283224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.286039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.316 [2024-07-15 16:17:10.295449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.295821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.295857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.295889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.296168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.296399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.296419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.296431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.299285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.316 [2024-07-15 16:17:10.308433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.316 [2024-07-15 16:17:10.308783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.316 [2024-07-15 16:17:10.308832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.316 [2024-07-15 16:17:10.308848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.316 [2024-07-15 16:17:10.309090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.316 [2024-07-15 16:17:10.309303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.316 [2024-07-15 16:17:10.309337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.316 [2024-07-15 16:17:10.309349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.316 [2024-07-15 16:17:10.312188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.575 [2024-07-15 16:17:10.321734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.322101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.322138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.322172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.322432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.322653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.322673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.322687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.325595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.575 [2024-07-15 16:17:10.334926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.335246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.335278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.335294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.335512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.335716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.335736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.335748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.338630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.575 [2024-07-15 16:17:10.348023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.348375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.348423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.348438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.348664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.348892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.348914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.348927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.352145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.575 [2024-07-15 16:17:10.361082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.361480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.361550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.361566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.361794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.362022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.362043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.362055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.364810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.575 [2024-07-15 16:17:10.374048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.374338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.374378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.374393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.374587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.374813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.374832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.374845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.377726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.575 [2024-07-15 16:17:10.387097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.387521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.387548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.387564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.387799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.388026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.575 [2024-07-15 16:17:10.388061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.575 [2024-07-15 16:17:10.388074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.575 [2024-07-15 16:17:10.390915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.575 [2024-07-15 16:17:10.400070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.575 [2024-07-15 16:17:10.400485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.575 [2024-07-15 16:17:10.400513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.575 [2024-07-15 16:17:10.400528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.575 [2024-07-15 16:17:10.400763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.575 [2024-07-15 16:17:10.400991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.401012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.401024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.403886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.413080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.413424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.413451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.413466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.413701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.413904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.413923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.413935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.416816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.576 [2024-07-15 16:17:10.426146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.426489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.426517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.426532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.426766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.426994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.427030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.427044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.429908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.439262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.439667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.439695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.439710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.439945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.440166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.440187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.440200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.443054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.576 [2024-07-15 16:17:10.452396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.452800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.452828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.452843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.453094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.453317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.453336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.453348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.456227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.465494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.465897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.465923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.465943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.466199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.466418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.466437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.466450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.469347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.576 [2024-07-15 16:17:10.478601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.478952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.478998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.479014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.479236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.479463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.479482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.479495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.482352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.491779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.492180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.492220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.492236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.492489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.492691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.492711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.492723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.495583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.576 [2024-07-15 16:17:10.504920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.505299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.505335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.505351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.505586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.505788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.505811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.505824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.508712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.518050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.518434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.518470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.518485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.518711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.518898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.518917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.518930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.521827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.576 [2024-07-15 16:17:10.531331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.531641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.531713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.531739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.531977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.532185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.576 [2024-07-15 16:17:10.532204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.576 [2024-07-15 16:17:10.532217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.576 [2024-07-15 16:17:10.535075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.576 [2024-07-15 16:17:10.544393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.576 [2024-07-15 16:17:10.544788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.576 [2024-07-15 16:17:10.544842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.576 [2024-07-15 16:17:10.544868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.576 [2024-07-15 16:17:10.545107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.576 [2024-07-15 16:17:10.545331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.577 [2024-07-15 16:17:10.545350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.577 [2024-07-15 16:17:10.545363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.577 [2024-07-15 16:17:10.548202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.577 [2024-07-15 16:17:10.557511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.577 [2024-07-15 16:17:10.557917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.577 [2024-07-15 16:17:10.557944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.577 [2024-07-15 16:17:10.557966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.577 [2024-07-15 16:17:10.558211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.577 [2024-07-15 16:17:10.558414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.577 [2024-07-15 16:17:10.558433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.577 [2024-07-15 16:17:10.558445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.577 [2024-07-15 16:17:10.561370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.577 [2024-07-15 16:17:10.570765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.577 [2024-07-15 16:17:10.571144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.577 [2024-07-15 16:17:10.571176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.577 [2024-07-15 16:17:10.571192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.577 [2024-07-15 16:17:10.571441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.577 [2024-07-15 16:17:10.571643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.577 [2024-07-15 16:17:10.571663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.577 [2024-07-15 16:17:10.571675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.577 [2024-07-15 16:17:10.574664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.835 [2024-07-15 16:17:10.584170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.835 [2024-07-15 16:17:10.584551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.835 [2024-07-15 16:17:10.584578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.835 [2024-07-15 16:17:10.584594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.835 [2024-07-15 16:17:10.584827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.835 [2024-07-15 16:17:10.585073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.835 [2024-07-15 16:17:10.585094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.835 [2024-07-15 16:17:10.585107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.835 [2024-07-15 16:17:10.587975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.835 [2024-07-15 16:17:10.597325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.835 [2024-07-15 16:17:10.597741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.835 [2024-07-15 16:17:10.597789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.835 [2024-07-15 16:17:10.597806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.835 [2024-07-15 16:17:10.598046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.835 [2024-07-15 16:17:10.598254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.835 [2024-07-15 16:17:10.598273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.598285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.601363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.610385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.610846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.610895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.610912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.611188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.611411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.611431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.611443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.614300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.836 [2024-07-15 16:17:10.623353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.623807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.623859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.623876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.624137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.624384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.624403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.624416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.627274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.636354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.636757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.636784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.636799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.637047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.637260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.637280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.637312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.640150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.836 [2024-07-15 16:17:10.649357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.649724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.649760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.649775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.650000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.650216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.650236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.650248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.653005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.662661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.663006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.663034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.663051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.663264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.663467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.663485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.663498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.666521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.836 [2024-07-15 16:17:10.676024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.676419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.676457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.676472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.676705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.676908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.676927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.676962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.679969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.689332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.689735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.689767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.689792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.690030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.690254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.690274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.690288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.693042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.836 [2024-07-15 16:17:10.702275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.702645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.702681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.702696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.702914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.703147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.703168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.703181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.706037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.715290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.715631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.715657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.715672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.715886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.716134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.716154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.716168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.719024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.836 [2024-07-15 16:17:10.728364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.728670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.728698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.728712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.728931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.729165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.729185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.836 [2024-07-15 16:17:10.729198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.836 [2024-07-15 16:17:10.731951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.836 [2024-07-15 16:17:10.741580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.836 [2024-07-15 16:17:10.741920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.836 [2024-07-15 16:17:10.741980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.836 [2024-07-15 16:17:10.741997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.836 [2024-07-15 16:17:10.742234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.836 [2024-07-15 16:17:10.742436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.836 [2024-07-15 16:17:10.742455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.742468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.745361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.837 [2024-07-15 16:17:10.754676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.755083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.755110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.755126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.755365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.755566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.755585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.755598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.758481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.837 [2024-07-15 16:17:10.767686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.768089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.768117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.768133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.768368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.768572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.768591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.768603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.771500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.837 [2024-07-15 16:17:10.780728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.781072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.781100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.781116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.781349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.781552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.781571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.781583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.784488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.837 [2024-07-15 16:17:10.793839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.794189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.794216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.794232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.794467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.794669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.794688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.794701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.797581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.837 [2024-07-15 16:17:10.806928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.807287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.807314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.807329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.807562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.807764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.807784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.807796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.810589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.837 [2024-07-15 16:17:10.819999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.820296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.820337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.820356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.820551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.820769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.820788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.820800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.823616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.837 [2024-07-15 16:17:10.833025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.837 [2024-07-15 16:17:10.833334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.837 [2024-07-15 16:17:10.833361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:24.837 [2024-07-15 16:17:10.833377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:24.837 [2024-07-15 16:17:10.833591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:24.837 [2024-07-15 16:17:10.833794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.837 [2024-07-15 16:17:10.833814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.837 [2024-07-15 16:17:10.833826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.837 [2024-07-15 16:17:10.836873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.098 [2024-07-15 16:17:10.846301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.098 [2024-07-15 16:17:10.846703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.098 [2024-07-15 16:17:10.846730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.098 [2024-07-15 16:17:10.846746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.098 [2024-07-15 16:17:10.846997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.098 [2024-07-15 16:17:10.847217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.098 [2024-07-15 16:17:10.847238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.098 [2024-07-15 16:17:10.847266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.098 [2024-07-15 16:17:10.850040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.098 [2024-07-15 16:17:10.859383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.098 [2024-07-15 16:17:10.859691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.098 [2024-07-15 16:17:10.859718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.098 [2024-07-15 16:17:10.859734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.098 [2024-07-15 16:17:10.859948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.098 [2024-07-15 16:17:10.860165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.098 [2024-07-15 16:17:10.860190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.098 [2024-07-15 16:17:10.860204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.098 [2024-07-15 16:17:10.862959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.098 [2024-07-15 16:17:10.872393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.098 [2024-07-15 16:17:10.872736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.098 [2024-07-15 16:17:10.872764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.098 [2024-07-15 16:17:10.872780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.098 [2024-07-15 16:17:10.873027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.098 [2024-07-15 16:17:10.873227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.098 [2024-07-15 16:17:10.873267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.098 [2024-07-15 16:17:10.873282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.098 [2024-07-15 16:17:10.876158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.098 [2024-07-15 16:17:10.885456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.098 [2024-07-15 16:17:10.885860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.098 [2024-07-15 16:17:10.885888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.098 [2024-07-15 16:17:10.885904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.098 [2024-07-15 16:17:10.886181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.098 [2024-07-15 16:17:10.886381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.098 [2024-07-15 16:17:10.886416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.098 [2024-07-15 16:17:10.886430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.098 [2024-07-15 16:17:10.889292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.098 [2024-07-15 16:17:10.898460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.098 [2024-07-15 16:17:10.898864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.098 [2024-07-15 16:17:10.898893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.098 [2024-07-15 16:17:10.898909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.098 [2024-07-15 16:17:10.899176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.899386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.899406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.899419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.902278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:10.911435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.911751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.911779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.911795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.912022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.912242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.912277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.912291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.915149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.099 [2024-07-15 16:17:10.924533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.924906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.924944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.924987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.925232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.925438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.925459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.925472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.928332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:10.937593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.937998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.938025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.938040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.938269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.938473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.938493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.938506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.941391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.099 [2024-07-15 16:17:10.950645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.951073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.951102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.951118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.951365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.951569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.951590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.951603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.954599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:10.963752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.964080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.964109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.964126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.964369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.964558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.964578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.964591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.967562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.099 [2024-07-15 16:17:10.976885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.977299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.977326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.977341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.977576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.977778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.977799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.977812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.980683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:10.989879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:10.990289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:10.990317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:10.990332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:10.990567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:10.990770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:10.990790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:10.990808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:10.993695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.099 [2024-07-15 16:17:11.002963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:11.003306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:11.003333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:11.003348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:11.003576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:11.003780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:11.003800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:11.003813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:11.006685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:11.016034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:11.016441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:11.016469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:11.016484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:11.016719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:11.016921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:11.016941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:11.016964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:11.019844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.099 [2024-07-15 16:17:11.029111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:11.029545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:11.029573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.099 [2024-07-15 16:17:11.029588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.099 [2024-07-15 16:17:11.029821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.099 [2024-07-15 16:17:11.030054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.099 [2024-07-15 16:17:11.030076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.099 [2024-07-15 16:17:11.030089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.099 [2024-07-15 16:17:11.032922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.099 [2024-07-15 16:17:11.042168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.099 [2024-07-15 16:17:11.042573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.099 [2024-07-15 16:17:11.042608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.100 [2024-07-15 16:17:11.042624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.100 [2024-07-15 16:17:11.042857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.100 [2024-07-15 16:17:11.043090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.100 [2024-07-15 16:17:11.043111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.100 [2024-07-15 16:17:11.043125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.100 [2024-07-15 16:17:11.045961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.100 [2024-07-15 16:17:11.055218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.100 [2024-07-15 16:17:11.055578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.100 [2024-07-15 16:17:11.055605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.100 [2024-07-15 16:17:11.055621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.100 [2024-07-15 16:17:11.055855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.100 [2024-07-15 16:17:11.056104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.100 [2024-07-15 16:17:11.056126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.100 [2024-07-15 16:17:11.056140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.100 [2024-07-15 16:17:11.059020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.100 [2024-07-15 16:17:11.068316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.100 [2024-07-15 16:17:11.068722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.100 [2024-07-15 16:17:11.068750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.100 [2024-07-15 16:17:11.068766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.100 [2024-07-15 16:17:11.069013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.100 [2024-07-15 16:17:11.069226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.100 [2024-07-15 16:17:11.069247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.100 [2024-07-15 16:17:11.069276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.100 [2024-07-15 16:17:11.072130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.100 [2024-07-15 16:17:11.081308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.100 [2024-07-15 16:17:11.081701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.100 [2024-07-15 16:17:11.081750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.100 [2024-07-15 16:17:11.081766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.100 [2024-07-15 16:17:11.082018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.100 [2024-07-15 16:17:11.082222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.100 [2024-07-15 16:17:11.082243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.100 [2024-07-15 16:17:11.082256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.100 [2024-07-15 16:17:11.085113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.100 [2024-07-15 16:17:11.094365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.100 [2024-07-15 16:17:11.094734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.100 [2024-07-15 16:17:11.094762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.100 [2024-07-15 16:17:11.094778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.100 [2024-07-15 16:17:11.095032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.100 [2024-07-15 16:17:11.095283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.100 [2024-07-15 16:17:11.095305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.100 [2024-07-15 16:17:11.095318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.100 [2024-07-15 16:17:11.098419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.361 [2024-07-15 16:17:11.107656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.361 [2024-07-15 16:17:11.108001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.361 [2024-07-15 16:17:11.108044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.108061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.108300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.108502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.108523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.108536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.111525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.120898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.121294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.121339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.121357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.121618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.121828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.121849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.121862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.124764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.362 [2024-07-15 16:17:11.133998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.134323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.134351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.134366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.134581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.134783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.134803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.134815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.137700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.147358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.147792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.147820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.147836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.148058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.148288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.148325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.148338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.151445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.362 [2024-07-15 16:17:11.160806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.161121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.161151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.161168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.161411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.161611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.161631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.161646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.164871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.174253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.174690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.174718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.174738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.174986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.175205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.175227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.175256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.178253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.362 [2024-07-15 16:17:11.187492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.187838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.187865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.187885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.188150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.188377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.188397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.188409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.191391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.200727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.201141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.201191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.201207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.201464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.201652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.201671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.201684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.204589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.362 [2024-07-15 16:17:11.213896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.214307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.214368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.214383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.214613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.214801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.214825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.214838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.217745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.227114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.227554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.227601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.227617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.227852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.228082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.228102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.228116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.230975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.362 [2024-07-15 16:17:11.240345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.240763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.240811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.362 [2024-07-15 16:17:11.240827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.362 [2024-07-15 16:17:11.241071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.362 [2024-07-15 16:17:11.241293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.362 [2024-07-15 16:17:11.241313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.362 [2024-07-15 16:17:11.241325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.362 [2024-07-15 16:17:11.244207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.362 [2024-07-15 16:17:11.253557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.362 [2024-07-15 16:17:11.253974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.362 [2024-07-15 16:17:11.254032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.254048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.254306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.254495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.254514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.254526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.257633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.363 [2024-07-15 16:17:11.266651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.267046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.267108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.267124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.267364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.267551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.267570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.267582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.270445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.363 [2024-07-15 16:17:11.279837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.280269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.280334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.280350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.280578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.280766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.280787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.280799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.283667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.363 [2024-07-15 16:17:11.293013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.293408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.293457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.293474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.293719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.293907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.293926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.293938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.296834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.363 [2024-07-15 16:17:11.306034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.306439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.306467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.306482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.306721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.306924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.306966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.306981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.309781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.363 [2024-07-15 16:17:11.319124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.319595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.319645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.319661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.319905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.320138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.320160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.320173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.323048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.363 [2024-07-15 16:17:11.332120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.332528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.332556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.332571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.332805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.333051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.333073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.333087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.335965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.363 [2024-07-15 16:17:11.345174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.345516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.345545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.345560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.345795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.346025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.346061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.346080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.348925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.363 [2024-07-15 16:17:11.358381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.363 [2024-07-15 16:17:11.358831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.363 [2024-07-15 16:17:11.358860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.363 [2024-07-15 16:17:11.358876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.363 [2024-07-15 16:17:11.359128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.363 [2024-07-15 16:17:11.359364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.363 [2024-07-15 16:17:11.359385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.363 [2024-07-15 16:17:11.359398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.363 [2024-07-15 16:17:11.362585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.623 [2024-07-15 16:17:11.371912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.372358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.372387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.372402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.372649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.372852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.372872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.372884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.375751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.624 [2024-07-15 16:17:11.384970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.385342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.385411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.385427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.385656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.385844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.385864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.385876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.388760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.624 [2024-07-15 16:17:11.398385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.398779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.398834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.398852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.399112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.399338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.399359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.399371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.402418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.624 [2024-07-15 16:17:11.411467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.411932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.411987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.412004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.412252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.412474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.412494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.412507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.415329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.624 [2024-07-15 16:17:11.424665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.425071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.425100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.425116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.425350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.425553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.425573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.425586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.428472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.624 [2024-07-15 16:17:11.437828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.438250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.438301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.438317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.438558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.438750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.438770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.438783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.441691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.624 [2024-07-15 16:17:11.450943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.451356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.451384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.451399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.451633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.451835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.451855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.451868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.454756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.624 [2024-07-15 16:17:11.463952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.464324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.464352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.464368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.464603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.464805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.464825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.464837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.467725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.624 [2024-07-15 16:17:11.477049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.477422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.477449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.477464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.477682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.477885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.477906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.477918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.480808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.624 [2024-07-15 16:17:11.490060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.490401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.490443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.490673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.490876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.490896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.624 [2024-07-15 16:17:11.490909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.624 [2024-07-15 16:17:11.493797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.624 [2024-07-15 16:17:11.503165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.624 [2024-07-15 16:17:11.503533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.624 [2024-07-15 16:17:11.503560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.624 [2024-07-15 16:17:11.503575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.624 [2024-07-15 16:17:11.503789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.624 [2024-07-15 16:17:11.504020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.624 [2024-07-15 16:17:11.504043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.504056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.506901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.625 [2024-07-15 16:17:11.516263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.516665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.516692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.516707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.516935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.517138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.517157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.517171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.520026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.625 [2024-07-15 16:17:11.529272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.529689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.529717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.529737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.529985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.530198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.530219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.530233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.533107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.625 [2024-07-15 16:17:11.542364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.542769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.542796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.542812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.543058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.543266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.543301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.543313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.546198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.625 [2024-07-15 16:17:11.555420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.555825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.555852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.555867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.556129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.556356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.556377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.556389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.559230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 884186 Killed "${NVMF_APP[@]}" "$@" 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:25.625 [2024-07-15 16:17:11.568570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.568976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.569012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.569029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.569256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.569473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.569494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.569508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=885261 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 885261 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 885261 ']' 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.625 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:25.625 [2024-07-15 16:17:11.572729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.625 [2024-07-15 16:17:11.581991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.582392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.582420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.582435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.582650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.582858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.582879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.582893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.585991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.625 [2024-07-15 16:17:11.595463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.595878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.595906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.595923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.596159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.596391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.596411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.596429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.599433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.625 [2024-07-15 16:17:11.608828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.609162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.609191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.609208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.609430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.609639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.609659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.609672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.625 [2024-07-15 16:17:11.612975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.625 [2024-07-15 16:17:11.615050] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:24:25.625 [2024-07-15 16:17:11.615108] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.625 [2024-07-15 16:17:11.622151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.625 [2024-07-15 16:17:11.622592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.625 [2024-07-15 16:17:11.622628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.625 [2024-07-15 16:17:11.622644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.625 [2024-07-15 16:17:11.622896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.625 [2024-07-15 16:17:11.623163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.625 [2024-07-15 16:17:11.623185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.625 [2024-07-15 16:17:11.623199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.886 [2024-07-15 16:17:11.626565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.886 [2024-07-15 16:17:11.635542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.886 [2024-07-15 16:17:11.635968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.886 [2024-07-15 16:17:11.635996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.886 [2024-07-15 16:17:11.636015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.886 [2024-07-15 16:17:11.636258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.886 [2024-07-15 16:17:11.636466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.886 [2024-07-15 16:17:11.636486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.886 [2024-07-15 16:17:11.636504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.639633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.648857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.649308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.649336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.649361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.649594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.649803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.649822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.649835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.887 [2024-07-15 16:17:11.652838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.887 [2024-07-15 16:17:11.662331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.662694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.662722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.662738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.662991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.663218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.663239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.663261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.666465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.675653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.676056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.676085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.676101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.676331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.676546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.676566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.676580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.679648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.887 [2024-07-15 16:17:11.683038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:25.887 [2024-07-15 16:17:11.689051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.689545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.689587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.689605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.689848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.690080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.690103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.690119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.693222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.702417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.702892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.702940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.702966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.703214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.703426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.703446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.703460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.706478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.887 [2024-07-15 16:17:11.715830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.716195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.716225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.716250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.716514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.716707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.716727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.716742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.719810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.729263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.729634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.729670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.729685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.729922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.730159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.730183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.730198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.733278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.887 [2024-07-15 16:17:11.742623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.743067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.743112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.743131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.743379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.743575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.743594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.743610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.746681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.756054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.756559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.756604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.756621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.756860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.757103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.757125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.757140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.760207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.887 [2024-07-15 16:17:11.769289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.769718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.769746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.769762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.770009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.770230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.887 [2024-07-15 16:17:11.770252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.887 [2024-07-15 16:17:11.770277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.887 [2024-07-15 16:17:11.773219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.887 [2024-07-15 16:17:11.782450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.887 [2024-07-15 16:17:11.782805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.887 [2024-07-15 16:17:11.782842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.887 [2024-07-15 16:17:11.782859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.887 [2024-07-15 16:17:11.783098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.887 [2024-07-15 16:17:11.783336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.783356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.783370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.786314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.888 [2024-07-15 16:17:11.791048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.888 [2024-07-15 16:17:11.791080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.888 [2024-07-15 16:17:11.791103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.888 [2024-07-15 16:17:11.791114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.888 [2024-07-15 16:17:11.791124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
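As the notices above suggest, a tracepoint snapshot can be pulled from the running app while the test executes; a minimal sketch using the shm name and instance id printed above (the output paths here are arbitrary choices, not from the log):
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt    # dump the current trace entries
    cp /dev/shm/nvmf_trace.0 /tmp/                   # or keep the raw shm file for offline analysis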
00:24:25.888 [2024-07-15 16:17:11.791184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.888 [2024-07-15 16:17:11.791259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.888 [2024-07-15 16:17:11.791276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.888 [2024-07-15 16:17:11.795799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.796227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.796270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.796288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.796536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.796743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.796764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.796780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.799911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.888 [2024-07-15 16:17:11.809253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.809784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.809832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.809851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.810108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.810338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.810360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.810376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.813469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
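The three reactor lines above match the "Total cores available: 3" notice earlier: one reactor per core in the app's core mask. The exact -m argument for this app is not shown in this excerpt, but a mask covering cores 1-3 would be 0xE, which can be checked with plain shell arithmetic:
    printf '0x%X\n' $(( (1<<1) | (1<<2) | (1<<3) ))    # prints 0xE, i.e. cores 1, 2 and 3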
00:24:25.888 [2024-07-15 16:17:11.822806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.823352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.823398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.823417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.823673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.823881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.823902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.823919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.827108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.888 [2024-07-15 16:17:11.836244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.836814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.836859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.836879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.837127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.837356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.837379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.837395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.840555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.888 [2024-07-15 16:17:11.849856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.850325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.850362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.850382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.850616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.850839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.850861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.850887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.854045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.888 [2024-07-15 16:17:11.863363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.863884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.863929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.863948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.864181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.864408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.864430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.864447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.867721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.888 [2024-07-15 16:17:11.876883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.888 [2024-07-15 16:17:11.877286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.888 [2024-07-15 16:17:11.877320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:25.888 [2024-07-15 16:17:11.877338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:25.888 [2024-07-15 16:17:11.877571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:25.888 [2024-07-15 16:17:11.877794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.888 [2024-07-15 16:17:11.877815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.888 [2024-07-15 16:17:11.877830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.888 [2024-07-15 16:17:11.881056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.147 [2024-07-15 16:17:11.890587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.891013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.891042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.891058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.891273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.891501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.891523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.891538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.147 [2024-07-15 16:17:11.894793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.147 [2024-07-15 16:17:11.904162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.904540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.904577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.904594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.904823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.905066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.905089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.905104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.147 [2024-07-15 16:17:11.908352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.147 [2024-07-15 16:17:11.917630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.918017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.918046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.918062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.918289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.918510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.918531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.918544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.147 [2024-07-15 16:17:11.921741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.147 [2024-07-15 16:17:11.931268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.931449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.147 [2024-07-15 16:17:11.931630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.931659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.931675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.931907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.932147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.932170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.932190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.147 [2024-07-15 16:17:11.935435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.147 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.147 [2024-07-15 16:17:11.944764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.945118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.945146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.945163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.945404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.945619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.945640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.945653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:26.147 [2024-07-15 16:17:11.948815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.147 [2024-07-15 16:17:11.958362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.958762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.958791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.958808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.959051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.959264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.147 [2024-07-15 16:17:11.959286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.147 [2024-07-15 16:17:11.959300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.147 [2024-07-15 16:17:11.962526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.147 [2024-07-15 16:17:11.971833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.147 [2024-07-15 16:17:11.972409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.147 [2024-07-15 16:17:11.972448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.147 [2024-07-15 16:17:11.972467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.147 [2024-07-15 16:17:11.972713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.147 [2024-07-15 16:17:11.972920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.148 [2024-07-15 16:17:11.972941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.148 [2024-07-15 16:17:11.972979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.148 Malloc0 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.148 [2024-07-15 16:17:11.976228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.148 [2024-07-15 16:17:11.985535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.148 [2024-07-15 16:17:11.985948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.148 [2024-07-15 16:17:11.985983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:24:26.148 [2024-07-15 16:17:11.985999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(5) to be set 00:24:26.148 [2024-07-15 16:17:11.986213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:24:26.148 [2024-07-15 16:17:11.986451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.148 [2024-07-15 16:17:11.986472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.148 [2024-07-15 16:17:11.986486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.148 [2024-07-15 16:17:11.989766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.148 [2024-07-15 16:17:11.993798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.148 16:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 884497 00:24:26.148 [2024-07-15 16:17:11.999125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.148 [2024-07-15 16:17:12.072048] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
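The rpc_cmd calls interleaved above are the test harness's wrapper around the target's JSON-RPC socket. Roughly the same target-side setup, issued by hand with scripts/rpc.py (arguments copied from the log, socket path left at the default), would be:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the listener on 10.0.0.2:4420 exists, the reconnect loop stops hitting ECONNREFUSED, which is why the last reset above is finally reported as successful.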
00:24:36.123 
00:24:36.123 Latency(us)
00:24:36.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:36.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:36.123 Verification LBA range: start 0x0 length 0x4000
00:24:36.123 Nvme1n1 : 15.01 6532.76 25.52 10373.81 0.00 7548.20 885.95 15437.37
00:24:36.123 ===================================================================================================================
00:24:36.123 Total : 6532.76 25.52 10373.81 0.00 7548.20 885.95 15437.37
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:36.123 rmmod nvme_tcp
00:24:36.123 rmmod nvme_fabrics
00:24:36.123 rmmod nvme_keyring
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 885261 ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 885261 ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885261'
00:24:36.123 killing process with pid 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 885261
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.123
16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.123 16:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.027 16:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.027 00:24:38.027 real 0m23.410s 00:24:38.027 user 1m2.237s 00:24:38.027 sys 0m4.727s 00:24:38.027 16:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.027 16:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.027 ************************************ 00:24:38.027 END TEST nvmf_bdevperf 00:24:38.027 ************************************ 00:24:38.027 16:17:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:38.027 16:17:23 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:38.027 16:17:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:38.027 16:17:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:38.027 16:17:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.027 ************************************ 00:24:38.027 START TEST nvmf_target_disconnect 00:24:38.027 ************************************ 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:38.027 * Looking for test storage... 
00:24:38.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.027 16:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.953 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:39.954 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:39.954 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.954 16:17:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:39.954 Found net devices under 0000:09:00.0: cvl_0_0 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:39.954 Found net devices under 0000:09:00.1: cvl_0_1 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:39.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:39.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms
00:24:39.954
00:24:39.954 --- 10.0.0.2 ping statistics ---
00:24:39.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:39.954 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:39.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:39.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:24:39.954
00:24:39.954 --- 10.0.0.1 ping statistics ---
00:24:39.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:39.954 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:39.954 ************************************
00:24:39.954 START TEST nvmf_target_disconnect_tc1
00:24:39.954 ************************************
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:24:39.954 16:17:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:40.213 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.213 [2024-07-15 16:17:26.014342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.213 [2024-07-15 16:17:26.014430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be21a0 with addr=10.0.0.2, port=4420
00:24:40.213 [2024-07-15 16:17:26.014464] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:40.213 [2024-07-15 16:17:26.014493] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:40.213 [2024-07-15 16:17:26.014506] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:24:40.213 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:24:40.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:24:40.213 Initializing NVMe Controllers
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:40.213
00:24:40.213 real 0m0.095s
00:24:40.213 user 0m0.034s
00:24:40.213 sys 0m0.061s
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:24:40.213 ************************************
00:24:40.213 END TEST nvmf_target_disconnect_tc1
00:24:40.213 ************************************
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:40.213 ************************************
00:24:40.213 START TEST nvmf_target_disconnect_tc2
00:24:40.213 ************************************
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=888418
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 888418
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 888418 ']'
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:40.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
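For reference, the disconnect_init trace that follows brings up nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then configures it over /var/tmp/spdk.sock; the rpc_cmd helper used below wraps scripts/rpc.py, so an equivalent manual bring-up is sketched here. This is only a sketch under the assumptions of this run (same SPDK build tree, default RPC socket, and the 10.0.0.2 addressing configured earlier), not part of the captured log:

  # start the target in the test namespace and wait for /var/tmp/spdk.sock to appear
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  # create a 64 MiB, 512-byte-block malloc bdev and expose it over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The method names and arguments are taken verbatim from the rpc_cmd calls recorded in the trace below.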
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:40.213 16:17:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:40.213 [2024-07-15 16:17:26.128640] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization...
00:24:40.213 [2024-07-15 16:17:26.128727] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:40.213 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.472 [2024-07-15 16:17:26.202135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:40.472 [2024-07-15 16:17:26.334379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:40.472 [2024-07-15 16:17:26.334456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:40.472 [2024-07-15 16:17:26.334479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:40.472 [2024-07-15 16:17:26.334495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:40.472 [2024-07-15 16:17:26.334509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:40.472 [2024-07-15 16:17:26.334603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:40.472 [2024-07-15 16:17:26.334667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:40.472 [2024-07-15 16:17:26.334730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:40.472 [2024-07-15 16:17:26.334737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 Malloc0
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 [2024-07-15 16:17:27.166199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.411 [2024-07-15 16:17:27.194492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.411 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=888571
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:24:41.412 16:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:41.955 EAL: No free 2048 kB
hugepages reported on node 1 00:24:43.322 16:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 888418 00:24:43.322 16:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 [2024-07-15 16:17:29.220347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O 
failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 [2024-07-15 16:17:29.220673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Write completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.322 starting I/O failed 00:24:43.322 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 
00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 [2024-07-15 16:17:29.220991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 
Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Write completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 Read completed with error (sct=0, sc=8) 00:24:43.323 starting I/O failed 00:24:43.323 [2024-07-15 16:17:29.221314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.323 [2024-07-15 16:17:29.221500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.221533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.221666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.221694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.221842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.221869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.221994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 
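The connection-failure blocks above and below this point all follow the same pattern: once nvmf_tgt (pid 888418) has been killed, every reconnection attempt against 10.0.0.2:4420 fails in posix_sock_create with errno = 111 and the host gives up on the qpair; the blocks differ only in their timestamps and in the tqpair address. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with, for example:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused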
00:24:43.323 [2024-07-15 16:17:29.222469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.222856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.222986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 
00:24:43.323 [2024-07-15 16:17:29.223830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.223855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.223980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.224888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.224913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 
00:24:43.323 [2024-07-15 16:17:29.225198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.225881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.225999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.226025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.323 qpair failed and we were unable to recover it. 00:24:43.323 [2024-07-15 16:17:29.226144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.323 [2024-07-15 16:17:29.226170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.226319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.226346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.226455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.226481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.226596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.226621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.226724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.226753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.226834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.226859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.227770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.227909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.227936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.228966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.228994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.229437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.229870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.229971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.230120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.230287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.230434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.230613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.230818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.230936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.230968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.231901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.231928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.232307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.232922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.232948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.233612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.233881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.233910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.234790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 
00:24:43.324 [2024-07-15 16:17:29.234894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.234920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.235071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.235189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.235359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.235500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.324 [2024-07-15 16:17:29.235607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.324 qpair failed and we were unable to recover it. 00:24:43.324 [2024-07-15 16:17:29.235694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.235723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.235813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.235839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.235931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.235968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.236237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.236869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.236901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.237590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.237895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.237922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.238885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.238912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.239013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.239908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.239935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.240423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.240875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.240993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.241791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.241931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.241964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.242914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.242940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.243163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.243914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.243941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.244071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.244327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.244472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 
00:24:43.325 [2024-07-15 16:17:29.244617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.244734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.244841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.325 [2024-07-15 16:17:29.244877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.325 qpair failed and we were unable to recover it. 00:24:43.325 [2024-07-15 16:17:29.245010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.245856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.245883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.246005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.246867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.246993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.247113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.247319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.247460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.247572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.247716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.247860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.247887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.248749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.248867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.248892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.249903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.249929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.250351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.250914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.250941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.251741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.251889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.251914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.252865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.252890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.253197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.253963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.253992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 
00:24:43.326 [2024-07-15 16:17:29.254638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.254916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.254943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.326 [2024-07-15 16:17:29.255774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.326 [2024-07-15 16:17:29.255800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.326 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.255889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.255920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.256021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.256909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.256937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.257403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.257900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.257925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.258709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.258964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.258990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.259965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.259990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.260144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.260829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.260968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.261129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.261324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.261543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.261756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.261970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.261997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.262898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.262997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.263024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.263110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.263137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 
00:24:43.327 [2024-07-15 16:17:29.263226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.263253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.263362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.263388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.263495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.327 [2024-07-15 16:17:29.263521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.327 qpair failed and we were unable to recover it. 00:24:43.327 [2024-07-15 16:17:29.263633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.263663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.263752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.263778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.263865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.263891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 
00:24:43.328 [2024-07-15 16:17:29.264600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.264969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.264997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.265814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.265840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 
00:24:43.328 [2024-07-15 16:17:29.265983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.328 [2024-07-15 16:17:29.266009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.328 qpair failed and we were unable to recover it. 00:24:43.328 [2024-07-15 16:17:29.266117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.266965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.266992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 
00:24:43.329 [2024-07-15 16:17:29.267216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.267926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.267971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 
00:24:43.329 [2024-07-15 16:17:29.268624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.268928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.268963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.269830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.269859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 
00:24:43.329 [2024-07-15 16:17:29.269975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.270961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.270995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.271109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.271135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.271314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.271364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 
00:24:43.329 [2024-07-15 16:17:29.271505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.271559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.329 qpair failed and we were unable to recover it. 00:24:43.329 [2024-07-15 16:17:29.271669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.329 [2024-07-15 16:17:29.271721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.271825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.271851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.271991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.272803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 
00:24:43.330 [2024-07-15 16:17:29.272934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.272989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.273945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.273982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 
00:24:43.330 [2024-07-15 16:17:29.274352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.274857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.274898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 
00:24:43.330 [2024-07-15 16:17:29.275689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.275857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.275972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.276950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.276983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 
00:24:43.330 [2024-07-15 16:17:29.277093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.277119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.277232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.330 [2024-07-15 16:17:29.277260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.330 qpair failed and we were unable to recover it. 00:24:43.330 [2024-07-15 16:17:29.277375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.277401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.277482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.277509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.277648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.277673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.277761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.277787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.277905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.277931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 
00:24:43.331 [2024-07-15 16:17:29.278434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.278868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.278895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 
00:24:43.331 [2024-07-15 16:17:29.279855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.279881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.279998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.280910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.280936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 
00:24:43.331 [2024-07-15 16:17:29.281164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.281945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.281979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.282103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.282130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.331 qpair failed and we were unable to recover it. 00:24:43.331 [2024-07-15 16:17:29.282225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.331 [2024-07-15 16:17:29.282251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.282384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.282410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.282531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.282558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 
00:24:43.332 [2024-07-15 16:17:29.282649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.282676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.282763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.282787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.282903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.282930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.283928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.283961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 
00:24:43.332 [2024-07-15 16:17:29.284054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.284884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.284911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 
00:24:43.332 [2024-07-15 16:17:29.285498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.285873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.285901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.286938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.286972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 
00:24:43.332 [2024-07-15 16:17:29.287114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.287229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.287377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.287497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.287689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.287919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.287978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.288124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.288151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.288260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.288286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.288384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.288411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.332 qpair failed and we were unable to recover it. 00:24:43.332 [2024-07-15 16:17:29.288650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.332 [2024-07-15 16:17:29.288704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.288792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.288818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.288902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.288929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.289928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.289960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.290244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.290910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.290938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.291596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.291860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.291889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.292902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.292929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.293065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.293890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.293917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.294432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.294885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.294912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.295761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.295907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.295935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.296921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.296948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 
00:24:43.333 [2024-07-15 16:17:29.297328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.297868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.297993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.298021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.298234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.298274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.298394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.298422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.298520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.298547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.333 qpair failed and we were unable to recover it. 00:24:43.333 [2024-07-15 16:17:29.298625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.333 [2024-07-15 16:17:29.298652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.298764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.298791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.298934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.298967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.299852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.299880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.300116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.300929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.300972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.301488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.301914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.301943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.302803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.302911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.302937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.303886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.303927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.304032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.304261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.304446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.304636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.304809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.304960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.304988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.305901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.305994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.306117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.306266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.306425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.306551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.306721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.306885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.306925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.307546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.307928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.307961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.308915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.308941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 
00:24:43.334 [2024-07-15 16:17:29.309097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.309126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.309227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.309267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.309389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.309419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.309509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.309537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.309632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.334 [2024-07-15 16:17:29.309661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.334 qpair failed and we were unable to recover it. 00:24:43.334 [2024-07-15 16:17:29.309817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.309857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.309983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.310128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.310244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.310390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 
00:24:43.335 [2024-07-15 16:17:29.310558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.310702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.310844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.310870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.311880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.311907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 
00:24:43.335 [2024-07-15 16:17:29.311991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.312803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.312853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.313015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.313160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.313272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 
00:24:43.335 [2024-07-15 16:17:29.313392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.313591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.313840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.313887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.314870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.314897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.315009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.315037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 
00:24:43.335 [2024-07-15 16:17:29.315119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.315146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.335 [2024-07-15 16:17:29.315273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.335 [2024-07-15 16:17:29.315300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.335 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.316505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.316541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.316699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.316729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.316865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.316892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.316995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.317129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.317274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.317457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 00:24:43.613 [2024-07-15 16:17:29.317608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.613 qpair failed and we were unable to recover it. 
00:24:43.613 [2024-07-15 16:17:29.317733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.613 [2024-07-15 16:17:29.317761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.317872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.317914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.318965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.318992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 
00:24:43.614 [2024-07-15 16:17:29.319267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.319870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.319991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 
00:24:43.614 [2024-07-15 16:17:29.320696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.320866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.320995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.321880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.321907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 
00:24:43.614 [2024-07-15 16:17:29.322138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.322864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.322894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 
00:24:43.614 [2024-07-15 16:17:29.323572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.323971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.323998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.324743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 
00:24:43.614 [2024-07-15 16:17:29.324882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.324908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.325000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.325027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.325111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.325137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.325249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.325275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.325364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.614 [2024-07-15 16:17:29.325390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.614 qpair failed and we were unable to recover it. 00:24:43.614 [2024-07-15 16:17:29.325475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.325502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.325613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.325638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.325722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.325748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.325861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.325887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.325999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.326140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.326899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.326926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.327406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.327934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.327967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.328743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.328881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.328908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.329883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.329915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.330051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.330893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.330921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.331419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.331938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.331971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.332086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.332112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.332236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.332262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.332386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.332412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.332548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.332575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 00:24:43.615 [2024-07-15 16:17:29.332696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.615 [2024-07-15 16:17:29.332722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.615 qpair failed and we were unable to recover it. 
00:24:43.615 [2024-07-15 16:17:29.332811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.332837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.332976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.333941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.333974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 
00:24:43.616 [2024-07-15 16:17:29.334397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.334935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.334966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 
00:24:43.616 [2024-07-15 16:17:29.335692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.335912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.335938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.336815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 
00:24:43.616 [2024-07-15 16:17:29.336935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.336966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.337970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.337997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.338133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.338297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 
00:24:43.616 [2024-07-15 16:17:29.338435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.338554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.338693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.338864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.338891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 
00:24:43.616 [2024-07-15 16:17:29.339875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.339901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.616 qpair failed and we were unable to recover it. 00:24:43.616 [2024-07-15 16:17:29.339979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.616 [2024-07-15 16:17:29.340006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.340935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.340967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 
00:24:43.617 [2024-07-15 16:17:29.341238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.341907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.341994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 
00:24:43.617 [2024-07-15 16:17:29.342490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.617 [2024-07-15 16:17:29.342777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.617 qpair failed and we were unable to recover it. 00:24:43.617 [2024-07-15 16:17:29.342871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.342898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 
00:24:43.618 [2024-07-15 16:17:29.343771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.343896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.343923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.344879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.344905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 
00:24:43.618 [2024-07-15 16:17:29.345131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.345865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.345981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 
00:24:43.618 [2024-07-15 16:17:29.346595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.346871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.346897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.347849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.347875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 
00:24:43.618 [2024-07-15 16:17:29.347976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.348850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.348876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 
00:24:43.618 [2024-07-15 16:17:29.349329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.349880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.349973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.350001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.350090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.350121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.350206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.350232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.618 [2024-07-15 16:17:29.350349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.618 [2024-07-15 16:17:29.350376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.618 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.350497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.350523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.350607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.350634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.350744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.350770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.350904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.350931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.351788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.351907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.351934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.352936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.352968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.353171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.353947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.353979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.354439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.354859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.354983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.355846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.355965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.355994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.356900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.356989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.357017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 
00:24:43.619 [2024-07-15 16:17:29.357132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.357164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.357306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.357342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.357482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.357532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.619 [2024-07-15 16:17:29.357673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.619 [2024-07-15 16:17:29.357709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.619 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.357898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.357933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.358066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.358094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.358193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.358220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.358336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.358365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.358581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.358621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.358827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.358861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 
00:24:43.620 [2024-07-15 16:17:29.359070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.359196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.359349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.359522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.359676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.359839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.359876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 
00:24:43.620 [2024-07-15 16:17:29.360531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.360928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.360961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.361884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.361941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 
00:24:43.620 [2024-07-15 16:17:29.362050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.362169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.362337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.362515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.362677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.362862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.362888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 
00:24:43.620 [2024-07-15 16:17:29.363571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.363861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.363979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 00:24:43.620 [2024-07-15 16:17:29.364664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.620 [2024-07-15 16:17:29.364690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.620 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.364794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.364821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.364967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.364995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.365842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.365990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.366151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.366336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.366534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.366643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.366791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.366928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.366962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.367678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.367965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.367991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.368888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.368927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.369086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161d0e0 is same with the state(5) to be set 00:24:43.621 [2024-07-15 16:17:29.369220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.369891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.369982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.370010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.370146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.370173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.370312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.370346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.370517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.370581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.370866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.370902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.371937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.371969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.372083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.372109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.372227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.372253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 
00:24:43.621 [2024-07-15 16:17:29.372369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.372396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.372656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.372721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.372916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.372943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.373046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.373072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.373179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.621 [2024-07-15 16:17:29.373220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.621 qpair failed and we were unable to recover it. 00:24:43.621 [2024-07-15 16:17:29.373346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.373375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.373491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.373518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.373633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.373659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.373751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.373778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.373893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.373920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.374031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.374972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.374999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.375361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.375902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.375928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.376767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.376906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.376933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.377851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.377999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.378144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.378310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.378453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.378705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.378847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.378970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.378997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.379730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.379868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.379908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.622 [2024-07-15 16:17:29.380917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.380945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 
00:24:43.622 [2024-07-15 16:17:29.381070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.622 [2024-07-15 16:17:29.381097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.622 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.381945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.381979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.382110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.382219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.382360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 
00:24:43.623 [2024-07-15 16:17:29.382546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.382710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.382860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.382887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.383869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.383894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 
00:24:43.623 [2024-07-15 16:17:29.384012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.384882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.384977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.385141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.385265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.385448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 
00:24:43.623 [2024-07-15 16:17:29.385672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.385859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.385885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.385979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.386909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.386934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 
00:24:43.623 [2024-07-15 16:17:29.387061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.387087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.387227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.387277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.387396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.387441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.387582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.387616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.387831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.387857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.388006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.388123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.388283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.388522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 00:24:43.623 [2024-07-15 16:17:29.388723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.623 qpair failed and we were unable to recover it. 
00:24:43.623 [2024-07-15 16:17:29.388910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.623 [2024-07-15 16:17:29.388936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.389898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.389924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 
00:24:43.624 [2024-07-15 16:17:29.390445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.390895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.390985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 
00:24:43.624 [2024-07-15 16:17:29.391784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.391946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.391990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.392899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.392925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 
00:24:43.624 [2024-07-15 16:17:29.393165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.393918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.393945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 
00:24:43.624 [2024-07-15 16:17:29.394622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.394947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.394978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.395861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.395992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.396019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 
00:24:43.624 [2024-07-15 16:17:29.396113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.396139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.396247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.396274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.396359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.396385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.396521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.624 [2024-07-15 16:17:29.396547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.624 qpair failed and we were unable to recover it. 00:24:43.624 [2024-07-15 16:17:29.396662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.396688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.396804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.396831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.396950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.396983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.397492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.397893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.397919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.398768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.398897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.398925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.399087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.399127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.399251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.399288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.399528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.399563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.399730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.399782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.399973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.400100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.400212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.400324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.400469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.400634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.400813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.400848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.401812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.401928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.402218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.402357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.402478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.402645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.402789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.402897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.402925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.403836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.403863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.403992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.404113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.404257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.404447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.404698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.404944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.404975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.405101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.405127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.405220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.405270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 00:24:43.625 [2024-07-15 16:17:29.405447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.625 [2024-07-15 16:17:29.405499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.625 qpair failed and we were unable to recover it. 
00:24:43.625 [2024-07-15 16:17:29.405659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.405713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.405868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.405918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.406057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.406196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.406405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.406649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.406815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.406983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.407109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.407258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 
00:24:43.626 [2024-07-15 16:17:29.407418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.407616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.407776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.407914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.407941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.408756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 
00:24:43.626 [2024-07-15 16:17:29.408920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.408949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.409839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.409988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 
00:24:43.626 [2024-07-15 16:17:29.410369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.410944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.410981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.411100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.411231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.411340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.411510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 
00:24:43.626 [2024-07-15 16:17:29.411684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.411868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.411895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.412878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.412915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 
00:24:43.626 [2024-07-15 16:17:29.413312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.413882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.413911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.626 [2024-07-15 16:17:29.414020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.626 [2024-07-15 16:17:29.414053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.626 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.414145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.414294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.414466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.414604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 
00:24:43.627 [2024-07-15 16:17:29.414737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.414871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.414899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.415885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.415913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 
00:24:43.627 [2024-07-15 16:17:29.416031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.416173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.416390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.416608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.416803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.416943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.416978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 
00:24:43.627 [2024-07-15 16:17:29.417658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.417848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.417950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.418892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.418999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 
00:24:43.627 [2024-07-15 16:17:29.419158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.419344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.419510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.419675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.419832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.419872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.419994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 
00:24:43.627 [2024-07-15 16:17:29.420614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.420893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.420922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.421046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.421074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.627 [2024-07-15 16:17:29.421201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.627 [2024-07-15 16:17:29.421252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.627 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.421489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.421543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.421717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.421795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.421964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.421990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.422377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.422888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.422915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.423762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.423897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.423924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.424899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.424986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.425129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.425246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.425392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.425534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.425674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.425818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.425844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.426610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.426903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.426929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.427761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.427820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.428187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.428970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.428998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.429117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.429143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.429226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.429252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.429371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.429398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 00:24:43.628 [2024-07-15 16:17:29.429511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.628 [2024-07-15 16:17:29.429537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.628 qpair failed and we were unable to recover it. 
00:24:43.628 [2024-07-15 16:17:29.429644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.429670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.429751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.429777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.429858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.429885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.430037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.430150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.430373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.430580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.430848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.430992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.431041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.431179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.431208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 
00:24:43.629 [2024-07-15 16:17:29.431350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.431377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.431546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.431573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.431847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.431913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.432165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.432327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.432600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.432729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.432857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.432966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.433168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 
00:24:43.629 [2024-07-15 16:17:29.433286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.433489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.433711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.433852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.433961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.433986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 
00:24:43.629 [2024-07-15 16:17:29.434803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.434843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.434993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.435878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.435918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 
00:24:43.629 [2024-07-15 16:17:29.436362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.436877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.436964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.437002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.437122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.437150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.437345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.437405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.437566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.437613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.437795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.437846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.437967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 
00:24:43.629 [2024-07-15 16:17:29.438119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.438243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.438388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.438530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.438673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.629 [2024-07-15 16:17:29.438700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.629 qpair failed and we were unable to recover it. 00:24:43.629 [2024-07-15 16:17:29.438792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.438818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.438943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.438995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 
00:24:43.630 [2024-07-15 16:17:29.439569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.439960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.439987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.440100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.440125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.440237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.440264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.440421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.440473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.440717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.440769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.440934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.440967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.441090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.441116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 
00:24:43.630 [2024-07-15 16:17:29.441216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.441243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.441361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.441387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.441554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.441603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.441805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.441854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.442021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.442045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.442162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.442186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.442278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.442334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.442607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.442655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.442894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.442918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.443040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.443064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 
00:24:43.630 [2024-07-15 16:17:29.443198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.443222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.443304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.443329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.443488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.443544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.443738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.443801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.444852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 
00:24:43.630 [2024-07-15 16:17:29.444965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.444990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.445888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.445926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.446027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.446198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 
00:24:43.630 [2024-07-15 16:17:29.446333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.446450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.446565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.630 [2024-07-15 16:17:29.446806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.630 [2024-07-15 16:17:29.446831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.630 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.449969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.450118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.450307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.450457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.450581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.450732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 
00:24:43.631 [2024-07-15 16:17:29.450860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.450886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.451959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.451985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.452089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.452247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.452400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 
00:24:43.631 [2024-07-15 16:17:29.452525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.452664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.452818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.452846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.453829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.453856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 
00:24:43.631 [2024-07-15 16:17:29.454131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.454923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.454947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.455095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.455119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.455203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.455239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.455347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.455372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 
00:24:43.631 [2024-07-15 16:17:29.456969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.457886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.457912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 
00:24:43.631 [2024-07-15 16:17:29.458501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.458945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.458981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.459080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.459106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.459227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.459253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.459347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.459372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.459454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.459479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.460067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.631 [2024-07-15 16:17:29.460098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.631 qpair failed and we were unable to recover it. 00:24:43.631 [2024-07-15 16:17:29.460233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.460350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.460466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.460607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.460733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.460925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.460960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.461773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.461917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.461945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.462886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.462916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.463056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.463179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.463382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.463638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.463751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.463898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.463925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.464807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.464922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.464949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.465854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.465975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.466208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.466367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.466484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.466626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.466772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.466896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.466922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 
00:24:43.632 [2024-07-15 16:17:29.467797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.467950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.467984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.468103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.468129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.468259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.468286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.632 [2024-07-15 16:17:29.468483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.632 [2024-07-15 16:17:29.468509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.632 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.468623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.468648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.468768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.468795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.468938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.468974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 
00:24:43.633 [2024-07-15 16:17:29.469352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.469896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.469922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 
00:24:43.633 [2024-07-15 16:17:29.470703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.470973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.470999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.471854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.471975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 
00:24:43.633 [2024-07-15 16:17:29.472200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.472348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.472484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.472624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.472770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.472918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.472945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 
00:24:43.633 [2024-07-15 16:17:29.473706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.473921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.473946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.474910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.474940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 
00:24:43.633 [2024-07-15 16:17:29.475075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.475104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.475248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.475275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.475400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.475427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.475513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.475538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.633 qpair failed and we were unable to recover it. 00:24:43.633 [2024-07-15 16:17:29.475681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.633 [2024-07-15 16:17:29.475707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.475796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.475822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.475913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.475939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.476113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.476253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.476483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.476632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.476774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.476919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.476946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.477850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.477876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.478111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.478871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.478898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.479499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.479880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.479907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.480886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.480912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.481034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.481101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.481271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.481327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.481500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.481564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.481703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.481729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.481843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.481869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.482869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.482896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.482982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.483217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.483448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.483642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.483786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.483905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.483930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.484117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.484179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.484361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.484422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.484539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.484565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 00:24:43.634 [2024-07-15 16:17:29.484710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.484737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.634 qpair failed and we were unable to recover it. 
00:24:43.634 [2024-07-15 16:17:29.484844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.634 [2024-07-15 16:17:29.484870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.484984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.485893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.485982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.486007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.486180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.486241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.486421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.486475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 
00:24:43.635 [2024-07-15 16:17:29.486614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.486641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.486840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.486866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.487969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.487997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 
00:24:43.635 [2024-07-15 16:17:29.488357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.488973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.488999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 
00:24:43.635 [2024-07-15 16:17:29.489592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.489851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.489878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.490024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.490052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.490225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.490293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.490599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.490664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.490900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.490990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.491261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.491328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.491552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.491617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.491841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.491868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 
00:24:43.635 [2024-07-15 16:17:29.491982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.492008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.492123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.492190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.492440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.492504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.492761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.492840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.493109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.493137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.493333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.493398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.493636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.493702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.494009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.494037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.494176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.494202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.494368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.494395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 
00:24:43.635 [2024-07-15 16:17:29.494631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.494697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.494902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.495011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.495153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.495219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.495523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.495588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.495832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.495859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.635 qpair failed and we were unable to recover it. 00:24:43.635 [2024-07-15 16:17:29.495999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.635 [2024-07-15 16:17:29.496026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.496141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.496168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.496330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.496397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.496718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.496782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.497015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.497043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 
00:24:43.636 [2024-07-15 16:17:29.497151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.497178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.497370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.497435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.497711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.497776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.498038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.498066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.498181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.498208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.498324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.498352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.498669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.498734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.498909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.498936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.499042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.499158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 
00:24:43.636 [2024-07-15 16:17:29.499289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.499512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.499742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.499883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.499908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.500079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.500132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.500320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.500371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.500603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.500679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.500796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.500823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.500968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.500995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.501180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.501253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 
00:24:43.636 [2024-07-15 16:17:29.501473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.501528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.501647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.501673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.501814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.501841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.501963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.501990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.502204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.502404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.502616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.502736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.502876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.502992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 
00:24:43.636 [2024-07-15 16:17:29.503108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.503953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.503991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.504122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.504290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.504448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 
00:24:43.636 [2024-07-15 16:17:29.504566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.504728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.504897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.504924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.505053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.636 [2024-07-15 16:17:29.505080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.636 qpair failed and we were unable to recover it. 00:24:43.636 [2024-07-15 16:17:29.505161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.505371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.505564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.505681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.505821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.505939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.505976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 
00:24:43.637 [2024-07-15 16:17:29.506139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.506201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.506337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.506413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.506552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.506579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.506666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.506692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.506774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.506799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.506935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.507000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.507285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.507357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.507650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.507715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.507894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.507920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.508013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.508039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 
00:24:43.637 [2024-07-15 16:17:29.508158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.508182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.508390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.508416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.508688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.508753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.509034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.509061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.509241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.509307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.509512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.509581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.509785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.509811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.509932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.509966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.510087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.510114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.510200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.510265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 
00:24:43.637 [2024-07-15 16:17:29.510570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.510639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.510850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.510876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.511030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.511168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.511311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.511449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.511661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.511914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.512008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.512102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.512127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.512327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.512394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 
00:24:43.637 [2024-07-15 16:17:29.512650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.512718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.513012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.513040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.513157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.513184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.513300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.513326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.513568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.513634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.513883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.513949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.514140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.514167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.514285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.514349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.514629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.514694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.514981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.515027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 
00:24:43.637 [2024-07-15 16:17:29.515168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.515198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.515319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.515345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.515426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.515452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.515641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.515706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.516027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.516055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.516137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.516163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.516370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.516434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.516673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.516741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.517030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.637 [2024-07-15 16:17:29.517057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.637 qpair failed and we were unable to recover it. 00:24:43.637 [2024-07-15 16:17:29.517174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.517202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.517394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.517421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.517532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.517558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.517732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.517783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.517966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.517993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.518091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.518117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.518210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.518290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.518578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.518644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.518898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.518979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.519130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.519157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.519272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.519300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.519418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.519445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.519721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.519786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.519993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.520020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.520132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.520158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.520347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.520412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.520630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.520696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.520901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.520927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.521063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.521091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.521201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.521228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.521344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.521371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.521586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.521651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.521895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.521977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.522116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.522144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.522262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.522330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.522640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.522707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.522973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.523040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.523257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.523325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.523616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.523682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.523898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.523980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.524224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.524290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.524539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.524616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.524874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.524941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.525241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.525307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.525517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.525585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.525834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.525899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.526187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.526254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.526562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.526628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.526831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.526896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.527157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.527224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.527471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.527536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.527817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.527883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.528168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.528233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.528446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.528513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.528814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.528879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.529207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.529274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.529519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.529584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.529869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.529935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.530209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.530275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.530532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.530599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.530895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.530975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 
00:24:43.638 [2024-07-15 16:17:29.531220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.531286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.531586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.531650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.531937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.532016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.532305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.532370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.532629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.638 [2024-07-15 16:17:29.532696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.638 qpair failed and we were unable to recover it. 00:24:43.638 [2024-07-15 16:17:29.532971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.533038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.533322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.533388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.533683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.533749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.534007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.534075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.534328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.534393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 
00:24:43.639 [2024-07-15 16:17:29.534648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.534712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.534998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.535064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.535306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.535373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.535669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.535733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.535991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.536080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.536322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.536389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.536684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.536749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.537007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.537076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.537339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.537403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.537700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.537764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 
00:24:43.639 [2024-07-15 16:17:29.538059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.538145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.538403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.538469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.538736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.538801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.539055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.539120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.539378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.539443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.539704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.539770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.540039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.540104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.540320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.540384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.540641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.540705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.540976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.541042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 
00:24:43.639 [2024-07-15 16:17:29.541328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.541392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.541642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.541710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.542002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.542069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.542303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.542368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.542644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.542710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.543003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.543069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.543355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.543420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.543720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.543784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.544037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.544106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.544335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.544403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 
00:24:43.639 [2024-07-15 16:17:29.544674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.544739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.545011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.545081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.545372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.545438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.545687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.545752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.546005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.546075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.546359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.546424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.546713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.546779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.547036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.547103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.547388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.547454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.547737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.547802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 
00:24:43.639 [2024-07-15 16:17:29.548052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.548118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.548357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.548422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.548725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.548791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.549039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.639 [2024-07-15 16:17:29.549106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.639 qpair failed and we were unable to recover it. 00:24:43.639 [2024-07-15 16:17:29.549350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.549415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.549649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.549715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.549999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.550066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.550319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.550384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.550672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.550737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.551023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.551090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.551340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.551419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.551674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.551738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.551993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.552061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.552274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.552340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.552591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.552658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.552926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.553035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.553300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.553366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.553652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.553718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.554004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.554070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.554355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.554421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.554685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.554749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.555034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.555100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.555353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.555418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.555628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.555695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.556008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.556075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.556337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.556402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.556666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.556731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.556982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.557048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.557307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.557376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.557612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.557678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.557976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.558042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.558285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.558352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.558644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.558710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.559009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.559076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.559326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.559394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.559614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.559682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.559916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.559994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.560292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.560357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.560639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.560704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.561002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.561068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.561320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.561386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.561651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.561719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.561986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.562053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.562304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.562369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.562608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.562673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.562978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.563044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.563329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.563394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.563661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.563727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.563949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.564026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.564270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.564338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.564626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.564701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.565001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.565067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.565254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.565319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.565577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.565645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.565928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.566009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.566215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.566283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.566551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.566616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.566898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.566976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.567242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.567307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 00:24:43.640 [2024-07-15 16:17:29.567564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.567628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.640 qpair failed and we were unable to recover it. 
00:24:43.640 [2024-07-15 16:17:29.567926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.640 [2024-07-15 16:17:29.568003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.568243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.568308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.568552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.568620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.568900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.568995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.569266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.569334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.569562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.569629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.569925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.570007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.570216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.570283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.570550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.570615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.570917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.570997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 
00:24:43.641 [2024-07-15 16:17:29.571251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.571317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.571612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.571677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.571983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.572048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.572331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.572395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.572656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.572722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.573011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.573077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.573363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.573428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.573644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.573712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.574005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.574072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.574305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.574371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 
00:24:43.641 [2024-07-15 16:17:29.574662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.574727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.575014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.575080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.575334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.575401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.575664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.575729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.575991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.576057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.576305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.576369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.576630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.576694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.577005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.577072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.577359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.577424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.577676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.577741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 
00:24:43.641 [2024-07-15 16:17:29.578006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.578081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.578367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.578431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.578681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.578748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.578996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.579063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.579346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.579413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.579666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.579731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.580014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.580080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.580287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.580352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.580632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.580696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.580945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.581025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 
00:24:43.641 [2024-07-15 16:17:29.581283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.581349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.581635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.581699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.581951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.582030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.582286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.582354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.582622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.582690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.582982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.583049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.583254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.583322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.583576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.583641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.583847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.583913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 00:24:43.641 [2024-07-15 16:17:29.584226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.641 [2024-07-15 16:17:29.584291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:43.641 qpair failed and we were unable to recover it. 
00:24:43.641 [2024-07-15 16:17:29.584545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:43.641 [2024-07-15 16:17:29.584610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420
00:24:43.641 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f1244000b90 through 2024-07-15 16:17:29.586553 ...]
00:24:43.641 [2024-07-15 16:17:29.586898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:43.642 [2024-07-15 16:17:29.587010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:43.642 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats continuously for tqpair=0x160f200 with addr=10.0.0.2, port=4420, from 2024-07-15 16:17:29.587322 through 16:17:29.653679 ...]
00:24:43.912 [2024-07-15 16:17:29.653937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:43.912 [2024-07-15 16:17:29.654016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:43.912 qpair failed and we were unable to recover it.
00:24:43.912 [2024-07-15 16:17:29.654271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.654343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.654631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.654693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.654985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.655050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.655308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.655371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.655617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.655683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.656000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.656066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.656299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.656363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.656599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.656662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.656953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.657032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.657274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.657337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 
00:24:43.912 [2024-07-15 16:17:29.657590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.657656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.657911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.657991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.658266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.658330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.658552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.658615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.658839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.658906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.659189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.912 [2024-07-15 16:17:29.659253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.912 qpair failed and we were unable to recover it. 00:24:43.912 [2024-07-15 16:17:29.659482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.659545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.659806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.659870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.660169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.660232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.660477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.660539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.660786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.660852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.661144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.661209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.661491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.661554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.661851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.661915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.662220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.662283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.662535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.662598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.662893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.662971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.663197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.663260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.663552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.663615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.663912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.664001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.664260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.664324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.664583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.664646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.664929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.665009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.665307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.665370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.665654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.665716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.665978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.666042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.666285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.666348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.666624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.666686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.666982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.667046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.667299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.667363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.667653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.667716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.667924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.668006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.668301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.668365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.668656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.668718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.669014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.669080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.669326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.669390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.669648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.669712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.670009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.670074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.670321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.670384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.670665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.670728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.670971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.671035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.671283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.671345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.671584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.671646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.671875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.671938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.672194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.672257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.672506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.672570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.672827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.672890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.673202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.673266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.673509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.673574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.673833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.673896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.674205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.674269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.674531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.674595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.674903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.674984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.675277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.675341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.675623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.675686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.675953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.676051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.676303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.676367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.676662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.676725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.677016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.677096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.677340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.677402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.677672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.677735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.678026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.678090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.678297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.678360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.678604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.678667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.678930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.679007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.679295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.679358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.679654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.679717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.679930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.680022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.680281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.680345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.680564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.680628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 
00:24:43.913 [2024-07-15 16:17:29.680833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.680896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.681206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.681270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.681585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.681648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.681929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.682009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.682302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.682366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.913 [2024-07-15 16:17:29.682621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.913 [2024-07-15 16:17:29.682684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.913 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.682947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.683026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.683254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.683318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.683599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.683662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.683913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.684003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 
00:24:43.914 [2024-07-15 16:17:29.684210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.684273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.684519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.684583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.684876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.684939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.685225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.685289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.685541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.685605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.685859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.685932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.686219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.686282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.686577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.686640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.686943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.687021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.687262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.687325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 
00:24:43.914 [2024-07-15 16:17:29.687548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.687611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.687829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.687891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.688202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.688267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.688538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.688603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.688843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.688910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.689139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.689202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.689457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.689520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.689806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.689868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.690137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.690202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.690499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.690562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 
00:24:43.914 [2024-07-15 16:17:29.690848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.690911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.691198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.691262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.691563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.691626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.691920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.692009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.692294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.692357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.692615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.692679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.692942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.693022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.693216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.693282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.693515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.693579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.693823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.693886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 
00:24:43.914 [2024-07-15 16:17:29.694095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.694159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.694443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.694505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.694796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.694859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.695147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.695212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.695467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.695530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.695815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.695878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.696148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.696213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.696423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.696486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.696744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.696806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 00:24:43.914 [2024-07-15 16:17:29.697094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.914 [2024-07-15 16:17:29.697160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.914 qpair failed and we were unable to recover it. 
00:24:43.914 [2024-07-15 16:17:29.697448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:43.914 [2024-07-15 16:17:29.697511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 
00:24:43.914 qpair failed and we were unable to recover it. 
[condensed: the same three-line failure sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-15 16:17:29.697448 through 16:17:29.768122 (console timestamps 00:24:43.914-00:24:43.917)]
00:24:43.917 [2024-07-15 16:17:29.768425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.917 [2024-07-15 16:17:29.768489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.917 qpair failed and we were unable to recover it. 00:24:43.917 [2024-07-15 16:17:29.768771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.917 [2024-07-15 16:17:29.768835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.917 qpair failed and we were unable to recover it. 00:24:43.917 [2024-07-15 16:17:29.769098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.917 [2024-07-15 16:17:29.769162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.917 qpair failed and we were unable to recover it. 00:24:43.917 [2024-07-15 16:17:29.769431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.917 [2024-07-15 16:17:29.769494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.917 qpair failed and we were unable to recover it. 00:24:43.917 [2024-07-15 16:17:29.769748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.917 [2024-07-15 16:17:29.769812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.917 qpair failed and we were unable to recover it. 00:24:43.917 [2024-07-15 16:17:29.770111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.770174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.770459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.770523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.770736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.770799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.771033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.771097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.771348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.771411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.771691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.771754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.772019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.772084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.772373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.772437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.772729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.772792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.773064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.773129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.773420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.773483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.773742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.773804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.774068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.774132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.774392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.774456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.774748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.774811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.775066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.775130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.775416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.775478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.775730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.775795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.776092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.776158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.776461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.776524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.776808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.776871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.777137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.777202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.777497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.777561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.777843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.777906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.778207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.778271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.778504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.778567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.778855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.778918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.779232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.779295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.779532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.779598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.779845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.779909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.780219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.780283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.780540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.780604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.780820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.780884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.781159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.781225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.781522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.781585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.781869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.781942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.782261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.782323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.782584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.782648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.782868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.782932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.783248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.783312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.783597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.783661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.783971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.784035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.784293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.784357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.784591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.784655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.784914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.785009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.785252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.785316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.785601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.785665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.785918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.785999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.786263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.786327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.786580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.786647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.786848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.786911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.787191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.787255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.787516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.787579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.787867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.787932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.788198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.788263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.788512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.788575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.788864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.788928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.789213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.789279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.789545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.789608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.789860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.789924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.790154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.790218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.790445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.790508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.790775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.790848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.791125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.791190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.791489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.791551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 
00:24:43.918 [2024-07-15 16:17:29.791828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.791891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.792137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.792201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.792440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.792503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.918 qpair failed and we were unable to recover it. 00:24:43.918 [2024-07-15 16:17:29.792800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.918 [2024-07-15 16:17:29.792862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.793161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.793225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.793481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.793544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.793798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.793861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.794161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.794224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.794504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.794567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.794848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.795192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.795255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.795474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.795538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.795807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.795871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.796105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.796170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.796456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.796519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.796799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.796862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.797145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.797212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.797455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.797518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.797766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.797829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.798122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.798188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.798433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.798496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.798733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.798796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.799038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.799102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.799307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.799373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.799616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.799689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.799909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.799987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.800277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.800339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.800594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.800657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.800909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.800984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.801240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.801303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.801587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.801650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.801866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.801930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.802172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.802237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.802525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.802588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.802884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.802948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.803260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.803324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.803572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.803636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.803922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.804002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.804247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.804311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.804565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.804627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.804880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.804943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.805273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.805337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.805636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.805700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.805917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.805996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.806251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.806315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.806565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.806628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.806919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.806997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.807278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.807341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.807627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.807691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.807984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.808048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.808338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.808401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.808647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.808709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.809011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.809077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.809323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.809388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.809639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.809702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.809982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.810047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.810300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.810363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.810640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.810703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.810946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.811021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 00:24:43.919 [2024-07-15 16:17:29.811307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.919 [2024-07-15 16:17:29.811371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.919 [2024-07-15 16:17:29.811625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:43.919 [2024-07-15 16:17:29.811690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 
00:24:43.919 qpair failed and we were unable to recover it. 
00:24:43.922 [... the same three messages - "connect() failed, errno = 111", "sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." - repeat continuously from 2024-07-15 16:17:29.811 through 16:17:29.877 with only the timestamps changing ...]
00:24:43.922 [2024-07-15 16:17:29.877288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.922 [2024-07-15 16:17:29.877351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.877544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.877608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.877909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.877988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.878200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.878234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.878409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.878442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.878615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.878648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.878795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.878849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.879109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.879143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.879264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.879459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.879493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.879764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.879827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.880092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.880126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.880286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.880359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.880612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.880676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.880951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.881177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.881211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.881449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.881491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.881693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.881746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.882017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.882051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.882171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.882203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.882350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.882383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.882528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.882561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.882840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.882903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.883110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.883144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.883273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.883315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.883478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.883546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.883845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.883908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.884108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.884142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.884305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.884339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.884638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.884700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.885016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.885051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.885176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.885211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.885470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.885532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.885818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.885880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.886077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.886111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.886276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.886309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.886594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.886657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.886891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.886966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.887140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.887174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.887332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.887405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.887644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.887707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.888006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.888058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.888227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.888294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.888589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.888652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.888941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.889022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.889169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.889203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.889370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.889441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.889695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.889758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.890019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.890053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.890172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.890206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.890345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.890388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.890554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.890590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.890734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.890769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.891020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.891054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.891176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.891210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.891378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.891411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.891729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.891792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.892041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.892074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.892219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.892268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.892541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.892604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 
00:24:43.923 [2024-07-15 16:17:29.892860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.892924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.893136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.893170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.923 [2024-07-15 16:17:29.893380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.923 [2024-07-15 16:17:29.893443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.923 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.893725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.893787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.894022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.894056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.894178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.894211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.894432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.894765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.894799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.895036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.895070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.895190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.895224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 
00:24:43.924 [2024-07-15 16:17:29.895369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.895403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.895686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.895748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.896000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.896034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.896188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.896221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.896354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.896387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.896552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.896609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.896780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.896853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.897090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.897124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.897329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.897393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.897680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.897742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 
00:24:43.924 [2024-07-15 16:17:29.898057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.898091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.898238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.898272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.898521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.898554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.898666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.898700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.898862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.898925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.899145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.899179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.899338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.899374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.899643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.899706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.900001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.900054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.900175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.900209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 
00:24:43.924 [2024-07-15 16:17:29.900354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.900388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.900592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.900657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.900922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.900964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.901105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.901139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.901260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.901294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.901435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.901469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.901662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.901695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.901901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.901980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.902149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.902183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:43.924 [2024-07-15 16:17:29.902371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.902434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 
00:24:43.924 [2024-07-15 16:17:29.902668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.924 [2024-07-15 16:17:29.902732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:43.924 qpair failed and we were unable to recover it. 00:24:44.196 [2024-07-15 16:17:29.902942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.196 [2024-07-15 16:17:29.903030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.196 qpair failed and we were unable to recover it. 00:24:44.196 [2024-07-15 16:17:29.903188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.196 [2024-07-15 16:17:29.903222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.196 qpair failed and we were unable to recover it. 00:24:44.196 [2024-07-15 16:17:29.903461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.196 [2024-07-15 16:17:29.903526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.196 qpair failed and we were unable to recover it. 00:24:44.196 [2024-07-15 16:17:29.903811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.903875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.904080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.904114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.904275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.904318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.904480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.904523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.904730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.904793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.905068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.905103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 
00:24:44.197 [2024-07-15 16:17:29.905220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.905254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.905374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.905408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.905639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.905681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.905873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.905907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.906035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.906069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.906213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.906247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.906508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.906570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.906849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.906912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.907170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.907233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.907525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.907588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 
00:24:44.197 [2024-07-15 16:17:29.907820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.907853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.908113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.908177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.908390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.908454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.908719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.908782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.909023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.909090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.909355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.909419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.909628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.909692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.909947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.909997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.910165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.910208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.910393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.910427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 
00:24:44.197 [2024-07-15 16:17:29.910568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.910603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.910821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.910885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.914136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.914192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.914444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.914490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.914738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.914792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.915043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.915110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.915370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.915404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.915571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.915637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.915865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.915908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.916139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.916202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 
00:24:44.197 [2024-07-15 16:17:29.916464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.916529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.916756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.916820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.917110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.917176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.917470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.917534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.197 [2024-07-15 16:17:29.917814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.197 [2024-07-15 16:17:29.917849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.197 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.917999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.918035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.918234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.918298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.918593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.918657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.918946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.918986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.919096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.919130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 
00:24:44.198 [2024-07-15 16:17:29.919411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.919454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.919597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.919640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.919927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.920009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.920271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.920314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.920515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.920586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.920837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.920901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.921198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.921298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.921564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.921631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.921850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.921915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.922195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.922258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 
00:24:44.198 [2024-07-15 16:17:29.922436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.922498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.922753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.922828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.923138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.923203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.923452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.923516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.923765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.923832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.924090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.924154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.924441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.924505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.924768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.924811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.924996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.925031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.925170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.925203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 
00:24:44.198 [2024-07-15 16:17:29.925310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.925344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.925515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.925571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.925767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.925830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.926044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.926109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.926403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.926466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.926699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.926765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.926988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.927055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.927309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.927373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.927628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.927662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.927805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.927838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 
00:24:44.198 [2024-07-15 16:17:29.927992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.928028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.928209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.928252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.928404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.928458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.928594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.928629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.928828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.928891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.198 qpair failed and we were unable to recover it. 00:24:44.198 [2024-07-15 16:17:29.929205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.198 [2024-07-15 16:17:29.929269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.929524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.929567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.929744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.929779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.929969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.930005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.930245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.930311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 
00:24:44.199 [2024-07-15 16:17:29.930583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.930618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.930784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.930834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.931089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.931155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.931355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.931421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.931710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.931774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.932030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.932065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.932340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.932403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.932617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.932682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.932919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.932953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.933131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.933165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 
00:24:44.199 [2024-07-15 16:17:29.933430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.933492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.933770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.933844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.934103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.934139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.934313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.934348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.934577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.934640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.934918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.934996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.935232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.935267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.935392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.935426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.935594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.935661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.935973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.936037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 
00:24:44.199 [2024-07-15 16:17:29.936326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.936389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.936669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.936731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.937004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.937047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.937253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.937295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.937468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.937510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.937691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.937754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.938034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.938100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.938402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.938465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.938761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.938824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.939072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.939137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 
00:24:44.199 [2024-07-15 16:17:29.939402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.939468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.939721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.939785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.940016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.940052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.940223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.940275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.940486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.940552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.940798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.199 [2024-07-15 16:17:29.940864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.199 qpair failed and we were unable to recover it. 00:24:44.199 [2024-07-15 16:17:29.941155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.941190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.941323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.941360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.941589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.941655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.941873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.941939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 
00:24:44.200 [2024-07-15 16:17:29.942170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.942234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.942518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.942581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.942791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.942833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.943010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.943053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.943321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.943383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.943636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.943701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.943977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.944042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.944326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.944389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.944643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.944676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.944782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.944816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 
00:24:44.200 [2024-07-15 16:17:29.944920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.944953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.945163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.945238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.945456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.945519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.945807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.945849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.946027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.946070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.946239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.946273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.946404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.946442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.946683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.946746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.946961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.947006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.947191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.947225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 
00:24:44.200 [2024-07-15 16:17:29.947401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.947434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.947688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.947751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.948037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.948101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.948359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.948392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.948557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.948590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.948811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.948884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.949106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.949173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.949465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.949528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.949818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.949860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.200 [2024-07-15 16:17:29.950002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.950073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 
00:24:44.200 [2024-07-15 16:17:29.950332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.200 [2024-07-15 16:17:29.950395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.200 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.950668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.950702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.950862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.950897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.951079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.951149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.951408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.951470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.951711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.951745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.951867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.951900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.952144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.952207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.952510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.952554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.952765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.952841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 
00:24:44.201 [2024-07-15 16:17:29.953155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.953221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.953439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.953504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.953790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.953825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.954006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.954042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.954201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.954274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.954520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.954582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.954862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.954924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.955191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.955253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.955542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.955584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.955757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.955799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 
00:24:44.201 [2024-07-15 16:17:29.956063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.956130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.956392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.956433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.956588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.956623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.956864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.956927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.957232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.957295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.957581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.957644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.957927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.958005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.958293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.958356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.958589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.958630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.958801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.958843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 
00:24:44.201 [2024-07-15 16:17:29.959067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.959133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.959354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.959418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.959703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.201 [2024-07-15 16:17:29.959766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.201 qpair failed and we were unable to recover it. 00:24:44.201 [2024-07-15 16:17:29.960054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.960118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.960359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.960424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.960698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.960741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.961009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.961074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.961407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.961470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.961707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.961772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.962045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.962089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 
00:24:44.202 [2024-07-15 16:17:29.962298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.962361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.962637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.962699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.962925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.963002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.963247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.963311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.963537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.963570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.963701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.963734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.963870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.963903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.964155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.964218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.964475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.964538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 00:24:44.202 [2024-07-15 16:17:29.964724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.202 [2024-07-15 16:17:29.964786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.202 qpair failed and we were unable to recover it. 
00:24:44.207 [2024-07-15 16:17:30.023216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.023264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.023458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.023523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.023734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.023781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.023949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.024005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.024207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.024254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.024451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.024506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.024682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.024729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.024971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.025018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.025215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.025262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.025428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.025475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 
00:24:44.208 [2024-07-15 16:17:30.025667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.025711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.025920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.025978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.026178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.026226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.026426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.026469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.026620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.026663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.026862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.026905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.027100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.027145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.027339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.027385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.027595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.027640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.027820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.027869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 
00:24:44.208 [2024-07-15 16:17:30.028063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.028110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.028274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.028318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.028513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.028558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.028725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.028771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.029008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.029055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.029227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.029272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.029478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.029524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.030840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.030890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.030998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.031195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 
00:24:44.208 [2024-07-15 16:17:30.031463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.031580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.031727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.031848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.031965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.031991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.032080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.032105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.208 qpair failed and we were unable to recover it. 00:24:44.208 [2024-07-15 16:17:30.032218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.208 [2024-07-15 16:17:30.032245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.032357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.032382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.032470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.032507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.032603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.032629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 
00:24:44.209 [2024-07-15 16:17:30.032779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.032806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.032905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.032931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.033875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.033990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 
00:24:44.209 [2024-07-15 16:17:30.034156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.034885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.034976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 
00:24:44.209 [2024-07-15 16:17:30.035548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.035917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.035942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 
00:24:44.209 [2024-07-15 16:17:30.036796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.036910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.036934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.037056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.037081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.037219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.037245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.209 qpair failed and we were unable to recover it. 00:24:44.209 [2024-07-15 16:17:30.037340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.209 [2024-07-15 16:17:30.037364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.037508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.037534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.037621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.037645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.037736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.037761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.037849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.037878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.037994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 
00:24:44.210 [2024-07-15 16:17:30.038143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.038926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.038951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 
00:24:44.210 [2024-07-15 16:17:30.039510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.039945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.039977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.040787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.040813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 
00:24:44.210 [2024-07-15 16:17:30.040974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.041001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.041145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.041171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.041290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.210 [2024-07-15 16:17:30.041315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.210 qpair failed and we were unable to recover it. 00:24:44.210 [2024-07-15 16:17:30.041399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.041424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.041518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.041542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.041636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.041662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.041780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.041806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.041927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.041952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 
00:24:44.211 [2024-07-15 16:17:30.042307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.042854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.042880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 
00:24:44.211 [2024-07-15 16:17:30.043773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.043919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.043945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.044895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.044997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 
00:24:44.211 [2024-07-15 16:17:30.045144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.045969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.045995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.046115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.046249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.046387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 
00:24:44.211 [2024-07-15 16:17:30.046521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.046658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.211 qpair failed and we were unable to recover it. 00:24:44.211 [2024-07-15 16:17:30.046800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.211 [2024-07-15 16:17:30.046826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.046962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.046990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.047743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 
00:24:44.212 [2024-07-15 16:17:30.047915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.047953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.048892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.048918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 
00:24:44.212 [2024-07-15 16:17:30.049281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.049863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.049990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 
00:24:44.212 [2024-07-15 16:17:30.050664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.050968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.050994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.212 qpair failed and we were unable to recover it. 00:24:44.212 [2024-07-15 16:17:30.051878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.212 [2024-07-15 16:17:30.051905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 
00:24:44.213 [2024-07-15 16:17:30.052005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.052819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.052844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 
00:24:44.213 [2024-07-15 16:17:30.053353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.053896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.053921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 
00:24:44.213 [2024-07-15 16:17:30.054736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.054857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.054882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.055849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.055881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 
00:24:44.213 [2024-07-15 16:17:30.056157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.056907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.056933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.057054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.213 [2024-07-15 16:17:30.057081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.213 qpair failed and we were unable to recover it. 00:24:44.213 [2024-07-15 16:17:30.057167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.057289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-15 16:17:30.057402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.057510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.057620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.057748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.057893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.057919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.058062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.058201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.058379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.058518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.058667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-15 16:17:30.058808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.058834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.059859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.059978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-15 16:17:30.060448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.060966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.060995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-15 16:17:30.061773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.061921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.061947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.062083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.062109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.062241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.062295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.062429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.062475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.214 [2024-07-15 16:17:30.062566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.214 [2024-07-15 16:17:30.062592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.214 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.062734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.062760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.062854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.062882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 
00:24:44.215 [2024-07-15 16:17:30.063302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.063894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.063920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.064047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.064215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.064353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.064501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.064688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 
00:24:44.215 [2024-07-15 16:17:30.064852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.064878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.065867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.065988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.066153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.066298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 
00:24:44.215 [2024-07-15 16:17:30.066461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.066582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.066750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.066872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.066899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.067000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.067027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.067142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.215 [2024-07-15 16:17:30.067168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.215 qpair failed and we were unable to recover it. 00:24:44.215 [2024-07-15 16:17:30.067287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.067439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.067558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.067701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 
00:24:44.216 [2024-07-15 16:17:30.067822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.067962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.067989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.068842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.068883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.069013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.069161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 
00:24:44.216 [2024-07-15 16:17:30.069333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.069515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.069701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.069878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.069904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.070050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.070076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.070193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.070234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.070411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.070446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.070553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.070587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.070800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.070834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.071017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 
00:24:44.216 [2024-07-15 16:17:30.071131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.071270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.071499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.071672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.071813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.071846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.072021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.072048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.072165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.072192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.072347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.072381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.072618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.072653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.072806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.072840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 
00:24:44.216 [2024-07-15 16:17:30.072977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.073145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.073265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.073447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.073634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.216 qpair failed and we were unable to recover it. 00:24:44.216 [2024-07-15 16:17:30.073810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.216 [2024-07-15 16:17:30.073844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.073974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.074019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.074161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.074187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.074326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.074361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.074535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.074569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 
00:24:44.217 [2024-07-15 16:17:30.074780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.074815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.075865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.075977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.076100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.076227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 
00:24:44.217 [2024-07-15 16:17:30.076408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.076560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.076705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.076858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.076893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.077085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.077221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.077450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.077621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.077803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.077949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 
00:24:44.217 [2024-07-15 16:17:30.078102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.078229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.078398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.078568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.078723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.078885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.078923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.079078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.079252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.079419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.079566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 
00:24:44.217 [2024-07-15 16:17:30.079738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.079890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.079924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.080052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.080079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.080169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.080195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.080310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.080346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.080494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.217 [2024-07-15 16:17:30.080530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.217 qpair failed and we were unable to recover it. 00:24:44.217 [2024-07-15 16:17:30.080642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.080679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.080827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.080863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 
00:24:44.218 [2024-07-15 16:17:30.081315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.081966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.081994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 
00:24:44.218 [2024-07-15 16:17:30.082603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.082898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.082924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.083892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.083918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 
00:24:44.218 [2024-07-15 16:17:30.084038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.084952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.084983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 
00:24:44.218 [2024-07-15 16:17:30.085362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.085923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.085951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.218 qpair failed and we were unable to recover it. 00:24:44.218 [2024-07-15 16:17:30.086092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.218 [2024-07-15 16:17:30.086127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.086269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.086302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.086418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.086452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.086610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.086644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.086792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.086818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 
00:24:44.219 [2024-07-15 16:17:30.086924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.086949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.087130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.087306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.087491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.087724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.087852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.087976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.088123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.088313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.088430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 
00:24:44.219 [2024-07-15 16:17:30.088601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.088742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.088858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.088884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.089884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.089996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 
00:24:44.219 [2024-07-15 16:17:30.090117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.090858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.090883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.091009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.091035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.091147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.219 [2024-07-15 16:17:30.091181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.219 qpair failed and we were unable to recover it. 00:24:44.219 [2024-07-15 16:17:30.091304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.091337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 
00:24:44.220 [2024-07-15 16:17:30.091497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.091533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.091724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.091771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.091865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.091892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.092921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.092968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.093119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.093147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 
00:24:44.220 [2024-07-15 16:17:30.093298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.093332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.093510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.093545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.093686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.093734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.093885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.093917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.094904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.094933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 
00:24:44.220 [2024-07-15 16:17:30.095049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.095197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.095387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.095542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.095752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.095907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.095941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.096096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.096302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.096522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.096699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 
00:24:44.220 [2024-07-15 16:17:30.096835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.096950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.096984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.220 [2024-07-15 16:17:30.097809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.220 [2024-07-15 16:17:30.097835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.220 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.097975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.098139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 
00:24:44.221 [2024-07-15 16:17:30.098371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.098559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.098726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.098866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.098893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.099837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.099862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 
00:24:44.221 [2024-07-15 16:17:30.100012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.100829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.100969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.101136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.101348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.101536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 
00:24:44.221 [2024-07-15 16:17:30.101696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.101877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.101903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.102912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.102939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 
00:24:44.221 [2024-07-15 16:17:30.103288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.103827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.103986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.104035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.104146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.104172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.221 qpair failed and we were unable to recover it. 00:24:44.221 [2024-07-15 16:17:30.104305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.221 [2024-07-15 16:17:30.104331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.104416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.104442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.104556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.104592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.104766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.104802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 
00:24:44.222 [2024-07-15 16:17:30.104951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.105146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.105265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.105436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.105630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.105871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.105907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.106073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.106198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.106342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.106549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 
00:24:44.222 [2024-07-15 16:17:30.106731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.106934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.106979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.107902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.107928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.108057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.108170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 
00:24:44.222 [2024-07-15 16:17:30.108352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.108503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.108699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.108893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.108920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.109792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.109828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 
00:24:44.222 [2024-07-15 16:17:30.109981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.110125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.110271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.110381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.110556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.222 qpair failed and we were unable to recover it. 00:24:44.222 [2024-07-15 16:17:30.110758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.222 [2024-07-15 16:17:30.110794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.110976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.111118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.111270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.111409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 
00:24:44.223 [2024-07-15 16:17:30.111598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.111821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.111857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.111984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.112157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.112325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.112517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.112681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.112867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.112903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.113058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.113173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 
00:24:44.223 [2024-07-15 16:17:30.113326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.113509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.113673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.113864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.113901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.114868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.114903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 
00:24:44.223 [2024-07-15 16:17:30.115090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.115203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.115373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.115544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.115760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.115942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.115975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.116068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.116094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.116229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.116293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.116474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.116514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.116637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.116677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 
00:24:44.223 [2024-07-15 16:17:30.116800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.116839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.223 [2024-07-15 16:17:30.117049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.223 [2024-07-15 16:17:30.117109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.223 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.117236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.117263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.117425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.117451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.117567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.117781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.117828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.117944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.117978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 
00:24:44.224 [2024-07-15 16:17:30.118467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.118887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.118989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 
00:24:44.224 [2024-07-15 16:17:30.119709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.119873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.119985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.120885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.120913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 
00:24:44.224 [2024-07-15 16:17:30.121011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.121154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.121348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.121518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.121736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.224 qpair failed and we were unable to recover it. 00:24:44.224 [2024-07-15 16:17:30.121889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.224 [2024-07-15 16:17:30.121928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.122099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.122273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.122502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.122670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 
00:24:44.225 [2024-07-15 16:17:30.122813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.122960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.122986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.123971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.123998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.124115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.124141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.124271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.124307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 
00:24:44.225 [2024-07-15 16:17:30.124431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.124474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.124620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.124657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.124802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.124839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.124964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.125933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.125967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 
00:24:44.225 [2024-07-15 16:17:30.126085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.126883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.126995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.127134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.127250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.127369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 
00:24:44.225 [2024-07-15 16:17:30.127532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.225 [2024-07-15 16:17:30.127673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.225 [2024-07-15 16:17:30.127699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.225 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.127839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.127864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.127985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.128779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 
00:24:44.226 [2024-07-15 16:17:30.128897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.128923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.129843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.129881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.130128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.130155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.130355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.130393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.130578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.130616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 
00:24:44.226 [2024-07-15 16:17:30.130750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.130814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.130961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.131867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.131988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.132099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.132257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 
00:24:44.226 [2024-07-15 16:17:30.132445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.132589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.132775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.132929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.132962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.226 qpair failed and we were unable to recover it. 00:24:44.226 [2024-07-15 16:17:30.133069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.226 [2024-07-15 16:17:30.133095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.133200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.133245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.133431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.133469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.133623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.133662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.133785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.133811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.133906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.133933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 
00:24:44.227 [2024-07-15 16:17:30.134052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.134203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.134417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.134570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.134741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.134897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.134938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.135055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.135083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.135209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.135241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.135387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.135413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.135550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.135587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 
00:24:44.227 [2024-07-15 16:17:30.135779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.135842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.135996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.136142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.136321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.136479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.136716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.136909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.136989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.137130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.137155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.137280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.137333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.137494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.137534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 
00:24:44.227 [2024-07-15 16:17:30.137691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.137730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.137901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.137937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.138913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.138950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.139078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.139103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.139238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 
00:24:44.227 [2024-07-15 16:17:30.139404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.139440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.139563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.227 [2024-07-15 16:17:30.139588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.227 qpair failed and we were unable to recover it. 00:24:44.227 [2024-07-15 16:17:30.139757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.139795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.139966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.140143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.140315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.140512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.140741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.140935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.140965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.141080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.141106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 
00:24:44.228 [2024-07-15 16:17:30.141222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.141247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.141377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.141416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.141572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.141610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.141758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.141795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.141971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.142111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.142276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.142463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.142666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.142800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 
00:24:44.228 [2024-07-15 16:17:30.142943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.142974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.143116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.143163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.143312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.143362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.143513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.143556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.143723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.143761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.143887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.143925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.144079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.144105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.144273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.144312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.144455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.144495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.144659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.144697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 
00:24:44.228 [2024-07-15 16:17:30.144830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.144869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.145070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.145252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.145454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.145639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.145846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.228 qpair failed and we were unable to recover it. 00:24:44.228 [2024-07-15 16:17:30.145982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.228 [2024-07-15 16:17:30.146028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.146146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.146268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.146377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 
00:24:44.229 [2024-07-15 16:17:30.146541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.146750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.146933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.146964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.147056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.147084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.147174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.147200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.147327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.147353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.147525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.147564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.147741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.147780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.148013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.148124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 
00:24:44.229 [2024-07-15 16:17:30.148304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.148476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.148683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.148875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.149023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.149049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.149169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.149194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.149390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.149429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.149588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.149638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.149776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.149816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.150021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.150048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 
00:24:44.229 [2024-07-15 16:17:30.150170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.150197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.150363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.150405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.150569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.150607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.150737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.150778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.150948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.151017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.151139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.151164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.151281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.151308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.151492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.151533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.229 [2024-07-15 16:17:30.151714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.229 [2024-07-15 16:17:30.151765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.229 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.151966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 
00:24:44.230 [2024-07-15 16:17:30.152096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.152267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.152387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.152551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.152728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.152782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.152942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.153103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.153226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.153338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.153503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 
00:24:44.230 [2024-07-15 16:17:30.153799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.153838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.154809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.154854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 
00:24:44.230 [2024-07-15 16:17:30.155422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.155807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.155977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 00:24:44.230 [2024-07-15 16:17:30.156867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.230 [2024-07-15 16:17:30.156907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.230 qpair failed and we were unable to recover it. 
00:24:44.230 [2024-07-15 16:17:30.157049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.157077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.157193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.157220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.157359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.157399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.157564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.157605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.157791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.157855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.158050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.158078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.158198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.158248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.158429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.158469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.158601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.158642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.158849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.158912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 
00:24:44.231 [2024-07-15 16:17:30.159091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.159118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.159253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.159294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.159458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.159498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.159631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.159671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.159835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.159874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.160071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.160193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.160304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.160446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.160647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 
00:24:44.231 [2024-07-15 16:17:30.160865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.160891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.161034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.161170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.161319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.161505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.161778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.161982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.162170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.162334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.162545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 
00:24:44.231 [2024-07-15 16:17:30.162791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.162960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.162987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.163107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.163134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.163300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.163340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.163466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.163505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.163675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.163717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.163881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.163922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.164142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.164190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.164356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.164398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.231 [2024-07-15 16:17:30.164564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.164605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 
00:24:44.231 [2024-07-15 16:17:30.164739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.231 [2024-07-15 16:17:30.164779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.231 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.164904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.164944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.165150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.165191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.165318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.165359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.165524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.165564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.165723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.165763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.165923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.165971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.166151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.166193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.166350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.166390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.166528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.166568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 
00:24:44.232 [2024-07-15 16:17:30.166770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.166809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.166945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.166994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.167131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.167171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.167341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.167381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.167524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.167564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.167740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.167783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.167969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.168012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.168172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.168214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.168379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.168421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.168598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.168640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 
00:24:44.232 [2024-07-15 16:17:30.168848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.168887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.169056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.169097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.169294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.169335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.169509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.169550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.169695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.169736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.169928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.169982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.170123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.170164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.170308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.170349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.170502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.170544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.170671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.170712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 
00:24:44.232 [2024-07-15 16:17:30.170890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.170930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.171098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.171141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.171317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.171361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.171508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.171550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.171752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.171795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.171930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.171983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.172120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.232 [2024-07-15 16:17:30.172161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.232 qpair failed and we were unable to recover it. 00:24:44.232 [2024-07-15 16:17:30.172365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.172414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.172582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.172625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.172825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.172868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 
00:24:44.233 [2024-07-15 16:17:30.173043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.173087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.173219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.173260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.173398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.173443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.173587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.173630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.173803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.173845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.174015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.174058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.174195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.174238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.174374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.174415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.174585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.174627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.174780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.174821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 
00:24:44.233 [2024-07-15 16:17:30.174984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.175028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.175208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.175251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.175390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.175431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.175612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.175652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.175823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.175863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.176042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.176085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.176231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.176275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.176457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.176500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.176679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.176720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.176852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.176894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 
00:24:44.233 [2024-07-15 16:17:30.177054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.177097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.177236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.177278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.177453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.177494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.177668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.177710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.177891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.177933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.178117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.178162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.178316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.178358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.178560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.178602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.178743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.178786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.178983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.179026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 
00:24:44.233 [2024-07-15 16:17:30.179160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.179201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.179385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.179428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.233 [2024-07-15 16:17:30.179558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.233 [2024-07-15 16:17:30.179601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.233 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.179758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.179799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.179944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.179998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.180201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.180244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.180422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.180465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.180638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.180687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.180831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.180873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.181010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.181054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 
00:24:44.234 [2024-07-15 16:17:30.181229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.181273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.181422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.181466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.181672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.181716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.181892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.181937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.182125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.182169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.182315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.182360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.182561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.182606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.182794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.182838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.183020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.183065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.183218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.183263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 
00:24:44.234 [2024-07-15 16:17:30.183395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.183438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.183612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.183656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.183855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.183909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.184113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.184158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.184332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.184376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.184530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.184573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.184790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.184833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.234 [2024-07-15 16:17:30.184982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.234 [2024-07-15 16:17:30.185026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.234 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.185206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.185252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.185474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.185519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.185705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.185750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.185923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.185993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.186146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.186193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.186381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.186427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.186619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.186664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.186851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.186896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.187088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.187135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.187310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.187355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.187534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.187580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.187774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.187819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.187972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.188019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.188197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.188243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.188390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.188435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.188647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.188691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.188855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.188910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.189151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.189197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.189332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.189376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.189563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.189619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.189804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.189851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.190062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.190108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.190256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.190301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.190513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.190558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.190764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.190818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.191065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.191250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.191295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.191442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.191487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.191664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.191710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.191923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.192005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.192211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.192256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.192411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.192455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.192610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.192654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.192862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.192918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.193192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.193247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.193509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.193563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.193773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.193830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.194073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.194148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.194352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.194423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.194625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.194673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.194834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.194879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.195072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.195120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.195279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.195324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.195474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.195518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.195668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.195737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.195915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.195967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.196192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.196237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.196418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.196463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.196623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.196667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.196876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.196930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.197099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.197144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 00:24:44.508 [2024-07-15 16:17:30.197323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.508 [2024-07-15 16:17:30.197369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.508 qpair failed and we were unable to recover it. 
00:24:44.508 [2024-07-15 16:17:30.197538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.197583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.197725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.197796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.198001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.198047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.198237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.198282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.198462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.198509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.198719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.198766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.198953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.199012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.199159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.199214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.199407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.199454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.199635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.199682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.199874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.199924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.200136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.200185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.200412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.200460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.200645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.200694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.200843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.200893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.201092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.201142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.201348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.201396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.201624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.201671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.201926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.202008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.202214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.202260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.202422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.202473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.202675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.202725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.202915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.202975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.203140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.203187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.203370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.203417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.203604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.203650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.203813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.203860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.204026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.204073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.204263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.204313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.204467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.204516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.204722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.204770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.204965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.205014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.205180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.205229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.205392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.205439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.205634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.205681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.205840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.205908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.206144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.206192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.206350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.206397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.206577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.206624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.206844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.206892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.207103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.207151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.207357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.207403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.207575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.207623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.207852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.207899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.208097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.208145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.208367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.208414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.208635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.208683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.208865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.208928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.209127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.209177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.209375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.209422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.209639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.209687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.209890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.209945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.210186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.210235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.210392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.210441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.210595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.210644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.210896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.210950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.211178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.211226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.211448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.211495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.211646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.211696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.211891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.211939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 
00:24:44.509 [2024-07-15 16:17:30.212120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.212169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.212379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.212426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.212605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.212652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.212907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.212976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.213158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.213209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.213366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.213416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.213606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.509 [2024-07-15 16:17:30.213657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.509 qpair failed and we were unable to recover it. 00:24:44.509 [2024-07-15 16:17:30.213867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.213922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.214194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.214242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.214407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.214454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.214612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.214657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.214836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.214882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.215116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.215163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.215361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.215407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.215636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.215682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.215884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.215935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.216167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.216213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.216436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.216483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.216699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.216745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.216980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.217029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.217190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.217237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.217433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.217479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.217670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.217718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.217913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.217973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.218152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.218200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.218402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.218453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.218645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.218697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.218897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.218967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.219218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.219269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.219469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.219521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.219757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.219808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.220000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.220052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.220278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.220329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.220515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.220564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.220797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.220845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.221045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.221094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.221245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.221293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.221524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.221580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.221831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.221882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.222177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.222255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.222478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.222554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.222801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.222852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.223049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.223122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.223399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.223471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.223713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.223763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.223942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.224024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.224231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.224301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.224589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.224662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.224855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.224906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.225186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.225261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.225534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.225609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.225829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.225882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.226093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.226168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.226444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.226751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.226803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.227032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.227107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.227376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.227450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.227678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.227731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.227966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.228038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.228297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.228370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 
00:24:44.510 [2024-07-15 16:17:30.228591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.228666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.228897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.228947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.229256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.229338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.229596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.229668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.229891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.229944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.510 qpair failed and we were unable to recover it. 00:24:44.510 [2024-07-15 16:17:30.230201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.510 [2024-07-15 16:17:30.230252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.511 qpair failed and we were unable to recover it. 00:24:44.511 [2024-07-15 16:17:30.230448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.511 [2024-07-15 16:17:30.230525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.511 qpair failed and we were unable to recover it. 00:24:44.511 [2024-07-15 16:17:30.230773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.511 [2024-07-15 16:17:30.230855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.511 qpair failed and we were unable to recover it. 00:24:44.511 [2024-07-15 16:17:30.231078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.511 [2024-07-15 16:17:30.231131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.511 qpair failed and we were unable to recover it. 00:24:44.511 [2024-07-15 16:17:30.231335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.511 [2024-07-15 16:17:30.231406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.511 qpair failed and we were unable to recover it. 
00:24:44.511 [2024-07-15 16:17:30.231656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.511 [2024-07-15 16:17:30.231727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:44.511 qpair failed and we were unable to recover it.
[... the same failure triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 16:17:30.231656 and 16:17:30.292719 ...]
00:24:44.514 [2024-07-15 16:17:30.292719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.514 [2024-07-15 16:17:30.292773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:44.514 qpair failed and we were unable to recover it.
00:24:44.514 [2024-07-15 16:17:30.292989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.293046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.293256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.293335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.293588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.293662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.293923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.293990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.294208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.294280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.294493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.294566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.294813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.294867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.295119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.295192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.295394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.295472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.295709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.295782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 
00:24:44.514 [2024-07-15 16:17:30.296049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.296123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.296336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.296390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.296612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.296667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.296891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.296946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.297210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.297295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.297584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.297656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.297829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.297885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.298117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.298192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.298413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.298486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.298710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.298764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 
00:24:44.514 [2024-07-15 16:17:30.298943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.299028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.299245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.299319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.299562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.299635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.299910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.299979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.300196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.300268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.300521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.300593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.300768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.300822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.301056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.301129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.301447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.301520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.301734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.301791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 
00:24:44.514 [2024-07-15 16:17:30.302018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.302095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.302390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.302467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.302682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.302736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.302952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.303021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.303268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.303342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.303593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.303665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.303928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.303995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.304281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.304353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.304594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.304667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.304851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.304907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 
00:24:44.514 [2024-07-15 16:17:30.305190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.305269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.305512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.305585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.305833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.305887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.306129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.306204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.306504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.306576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.306830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.306884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.307154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.307229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.307471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.307546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.514 [2024-07-15 16:17:30.307800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.514 [2024-07-15 16:17:30.307854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.514 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.308133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.308207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.308467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.308540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.308760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.308814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.309056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.309132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.309378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.309450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.309692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.309776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.309993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.310049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.310311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.310384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.310561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.310618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.310834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.310889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.311115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.311189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.311412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.311484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.311727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.311782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.311993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.312049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.312290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.312364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.312628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.312682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.312909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.312978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.313230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.313304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.313559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.313631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.313824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.313879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.314146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.314220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.314405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.314478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.314698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.314753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.315043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.315117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.315364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.315419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.315640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.315694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.315908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.315973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.316232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.316288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.316458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.316514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.316737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.316791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.317031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.317108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.317325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.317397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.317655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.317710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.317916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.317982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.318236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.318312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.318498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.318572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.318795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.318849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.319146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.319218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.319498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.319571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.319786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.319840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.320088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.320161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.320404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.320477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.320665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.320719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.320974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.321028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.321284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.321355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.321583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.321663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.321833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.321887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.322152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.322226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.322480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.322551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.322733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.322790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.323003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.323058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.323249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.323303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.323549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.323621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.323844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.323900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.324201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.324280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.324559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.324630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.324876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.324930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.325204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.325290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.325579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.325650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.325870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.325924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.326196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.326251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 
00:24:44.515 [2024-07-15 16:17:30.326500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.326573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.326784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.326838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.327126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.327199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.327454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.515 [2024-07-15 16:17:30.327527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.515 qpair failed and we were unable to recover it. 00:24:44.515 [2024-07-15 16:17:30.327745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.327800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.328071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.328146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.328391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.328466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.328693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.328747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.328926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.328994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.329244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.329324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.329556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.329628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.329856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.329910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.330131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.330205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.330448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.330520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.330779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.330833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.331086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.331160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.331411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.331483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.331726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.331799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.332008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.332063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.332302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.332376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.332624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.332678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.332890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.332944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.333250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.333323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.333587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.333660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.333922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.333996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.334285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.334357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.334634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.334708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.334923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.335008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.335292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.335365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.335599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.335671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.335885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.335939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.336148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.336228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.336477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.336552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.336840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.336894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.337124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.337198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.337410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.337465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.337743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.337816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.338058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.338135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.338360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.338414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.338628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.338682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.338884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.338938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.339164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.339235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.339471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.339543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.339796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.339850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.340056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.340132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.340310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.340364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.340609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.340663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.340848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.340902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.341170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.341243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.341425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.341480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.341693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.341746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.342060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.342117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.342300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.342353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.342545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.342600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.342783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.342837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.343121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.343194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.343478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.343551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.343805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.343859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.344099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.344174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.344455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.344528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 
00:24:44.516 [2024-07-15 16:17:30.344774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.344827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.345063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.345139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.345398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.345471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.345682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.345736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.345945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.346024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.346252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.346325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.346547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.346621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.516 [2024-07-15 16:17:30.346831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.516 [2024-07-15 16:17:30.346886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.516 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.347195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.347270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.347506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.347578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.347800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.347854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.348060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.348135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.348335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.348407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.348689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.348762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.348984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.349039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.349276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.349350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.349595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.349667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.349892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.349946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.350234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.350290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.350498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.350574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.350797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.350851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.351091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.351166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.351414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.351487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.351703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.351759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.352029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.352103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.352354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.352426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.352678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.352732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.352945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.353013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.353299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.353372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.353614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.353689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.353900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.353966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.354220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.354293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.354530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.354601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.354859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.354914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.355206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.355280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.355521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.355595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.355792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.355846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.356100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.356176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.356464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.356537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.356759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.356816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.357061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.357136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.357385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.357459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.357747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.357819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.358055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.358131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.358323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.358405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.358637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.358712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.358932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.359007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.359219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.359294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.359534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.359607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.359861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.359915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.360179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.360255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.360478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.360532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.360742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.360795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.360998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.361054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.361286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.361357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.361593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.361664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.361915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.361980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.362289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.362366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.362635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.362708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.362980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.363035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.363290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.363346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.363522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.363575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.363795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.363849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.364057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.364132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.364374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.364447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.364655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.364730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.364968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.365023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.365315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.365388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.365625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.365698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.365927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.365992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 
00:24:44.517 [2024-07-15 16:17:30.366259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.366316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.366609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.366682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.366884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.366937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.367228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.367301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.367581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.367654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.517 qpair failed and we were unable to recover it. 00:24:44.517 [2024-07-15 16:17:30.367897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.517 [2024-07-15 16:17:30.367950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.368182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.368239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.368455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.368527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.368765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.368837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.369106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.369180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
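00:24:44.518 [editor note] The repeated "connect() failed, errno = 111" records above are ECONNREFUSED on Linux: the host side keeps retrying the NVMe/TCP qpair connect to 10.0.0.2:4420 while nothing on the target is accepting the TCP connection. A minimal sketch that reproduces the same errno with plain POSIX sockets (not SPDK's sock layer; only the address and port are taken from the log, everything else is illustrative):

/* Minimal sketch: reproduce the "connect() failed, errno = 111" seen above.
 * Plain POSIX sockets, not SPDK's posix_sock_create; address/port from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED),
         * matching the posix_sock_create errors in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}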
00:24:44.518 [2024-07-15 16:17:30.369245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161d0e0 (9): Bad file descriptor 00:24:44.518 [2024-07-15 16:17:30.369673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.369769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.370080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.370138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.370416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.370480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.370732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.370798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.371107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.371163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.371400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.371463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.371713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.371774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.372008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.372063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.372277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.372331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
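00:24:44.518 [editor note] The first record in the block above differs from the rest: "Failed to flush tqpair=0x161d0e0 (9): Bad file descriptor" is errno 9 (EBADF), i.e. the socket behind that qpair was already closed or invalid when the flush was attempted; afterwards the host continues retrying against a new qpair buffer (0x7f1254000b90) and again gets ECONNREFUSED. A tiny sketch, again with plain POSIX calls and purely illustrative of the errno (not SPDK's flush path), showing that I/O on a closed descriptor yields the same error:

/* Tiny sketch: errno 9 (EBADF), as in "Failed to flush tqpair=... (9): Bad file descriptor".
 * Plain POSIX calls; illustrative only, not SPDK's nvme_tcp_qpair_process_completions. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                                    /* descriptor is gone ... */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {              /* ... so any further I/O fails */
        printf("send() failed, errno = %d (%s)\n", errno, strerror(errno)); /* 9, EBADF */
    }
    return 0;
}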
00:24:44.518 [2024-07-15 16:17:30.372527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.372589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.372799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.372869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.373071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.373126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.373380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.373459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.373705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.373771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.374097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.374152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.374431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.374494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.374799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.374861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.375111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.375177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.375464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.375528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
00:24:44.518 [2024-07-15 16:17:30.375794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.375857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.376122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.376178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.376438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.376500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.376785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.376848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.377136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.377190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.377429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.377493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.377743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.377808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.378065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.378120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.378364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.378427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.378701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.378771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
00:24:44.518 [2024-07-15 16:17:30.379026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.379081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.379275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.379329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.379625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.379687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.380000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.380055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.380274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.380336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.380588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.380660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.380891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.380972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.381216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.381269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.381502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.381565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.381866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.381928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
00:24:44.518 [2024-07-15 16:17:30.382224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.382301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.382596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.382658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.382926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.383021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.383243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.383300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.383553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.383616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.383883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.383969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.384244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.384299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.384544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.384606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.384917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.385026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.385318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.385381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 
00:24:44.518 [2024-07-15 16:17:30.385626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.385690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.385950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.386019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.386231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.386312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.386517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.386582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.386899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.386990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.387235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.387299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.387587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.518 [2024-07-15 16:17:30.387650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.518 qpair failed and we were unable to recover it. 00:24:44.518 [2024-07-15 16:17:30.387979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.388047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.388331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.388403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.388602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.388665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.388888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.388950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.389190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.389255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.389538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.389601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.389879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.389941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.390214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.390277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.390520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.390585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.390799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.390862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.391121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.391187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.391433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.391505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.391695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.391758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.392048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.392113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.392358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.392423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.392715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.392787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.393093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.393158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.393406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.393469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.393679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.394028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.394092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.394349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.394412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.394650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.394714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.395006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.395071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.395361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.395425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.395679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.395741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.395976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.396300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.396364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.396642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.396705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.396981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.397045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.397324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.397388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.397641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.397703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.397904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.397987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.398272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.398335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.398619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.398681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.398929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.399017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.399243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.399317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.399530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.399593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.399874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.399937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.400232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.400295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.400551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.400619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.400856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.400921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.401172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.401245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.401528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.401590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.401885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.401948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.402183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.402248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.402484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.402548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.402823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.402886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.403211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.403275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.403532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.403595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.403847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.403912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.404229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.404302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.404553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.404617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.404881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.404967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.405227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.405289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.405574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.405637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.405938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.406018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.406254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.406317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.406598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.406661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.406949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.407025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.407297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.407363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.407627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.407692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.408012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.408216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 
00:24:44.519 [2024-07-15 16:17:30.408396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.408570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.408720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.408920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.408952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.409071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.519 [2024-07-15 16:17:30.409104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.519 qpair failed and we were unable to recover it. 00:24:44.519 [2024-07-15 16:17:30.409247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.409313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.409553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.409585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.409689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.409748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.409972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.410037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.410254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.410316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.410530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.410592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.410836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.410901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.411149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.411212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.411463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.411526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.411808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.411870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.412119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.412184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.412430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.412492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.412785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.412847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.413133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.413207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.413488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.413550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.413791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.413856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.414133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.414197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.414458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.414525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.414773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.414836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.415091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.415159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.415461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.415524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.415763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.415828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.416128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.416192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.416449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.416511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.416764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.416826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.417092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.417155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.417446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.417508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.417770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.417836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.418119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.418182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.418433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.418499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.418785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.418848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.419128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.419191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.419441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.419504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.419719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.419783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.420073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.420137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.420399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.420462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.420746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.420813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.421109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.421173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.421453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.421516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.421760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.421825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.422074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.422139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.422441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.422503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.422794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.422857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.423091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.423156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.423406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.423468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.423749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.423813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.424032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.424096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.424341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.424404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.424680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.424743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.424987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.425051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.425316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.425380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.425657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.425720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.425982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.426045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.426282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.426354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.426628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.426692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.426933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.427010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.427290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.427353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.427588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.427650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.427867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.427932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.428189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.428253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.428507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.428569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.428811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.428873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.429089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.429155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.429446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.429508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.429796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.429859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 
00:24:44.520 [2024-07-15 16:17:30.430070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.430134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.430378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.430443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.430753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.430816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.520 qpair failed and we were unable to recover it. 00:24:44.520 [2024-07-15 16:17:30.431085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.520 [2024-07-15 16:17:30.431150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.431460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.431523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.431767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.431830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.432126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.432189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.432440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.432504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.432769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.432842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.433159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.433222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.433513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.433581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.433868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.433932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.434205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.434270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.434529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.434595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.434824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.434887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.435206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.435273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.435525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.435588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.435871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.435935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.436247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.436311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.436604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.436666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.436924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.437011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.437259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.437324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.437621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.437684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.437950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.438027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.438273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.438335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.438557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.438620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.438865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.438927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.439203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.439264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.439551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.439624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.439878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.439940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.440189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.440250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.440488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.440551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.440843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.440906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.441146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.441211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.441454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.441517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.441783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.441844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.442054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.442119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.442381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.442444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.442683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.442747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.442997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.443062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.443324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.443387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.443682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.443745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.444010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.444077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.444334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.444398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.444682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.444745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.444969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.445034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.445285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.445347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.445567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.445631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.445880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.445943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.446203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.446268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.446562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.446625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.446829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.446891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.447159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.447222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.447478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.447544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.447775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.447840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.448179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.448244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.448525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.448587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.448802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.448867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.449095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.449161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.449422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.449485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 
00:24:44.521 [2024-07-15 16:17:30.449728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.449793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.450040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.450104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.450354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.450417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.450658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.521 [2024-07-15 16:17:30.450721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.521 qpair failed and we were unable to recover it. 00:24:44.521 [2024-07-15 16:17:30.451005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.451069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.451328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.451393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.451631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.451695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.451944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.452050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.452344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.452417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.452674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.452738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.452994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.453061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.453313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.453376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.453620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.453682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.453936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.454021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.454327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.454390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.454629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.454692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.454982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.455051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.455346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.455409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.455629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.455692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.455926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.456019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.456289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.456353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.456564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.456627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.456894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.456988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.457258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.457326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.457585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.457646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.457894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.457973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.458236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.458299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.458525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.458590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.458809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.458874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.459216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.459285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.459542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.459604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.459851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.459917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.460149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.460214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.460439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.460502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.460713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.460775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.461038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.461103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.461384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.461447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.461688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.461753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.462003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.462069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.462314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.462377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.462632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.462694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.462928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.463014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.463294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.463355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.463634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.463698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.463893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.463971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.464209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.464273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.464522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.464585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.464833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.464895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.465109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.465182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.465390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.465456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.465761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.465824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.466025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.466090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.466331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.466394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.466673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.466735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.467088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.467151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.467431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.467494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.467706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.467768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.468019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.468083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.468370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.468433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.468721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.468782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.469041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.469105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.469389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.469452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.469746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.469809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.470044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.470115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.470335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.470400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.470679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.470742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.471005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.471071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.471366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.471430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.471675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.471738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.472037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.472101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 
00:24:44.522 [2024-07-15 16:17:30.472356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.472420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.472705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.472768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.522 [2024-07-15 16:17:30.473005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.522 [2024-07-15 16:17:30.473092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.522 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.473358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.473421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.473702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.473764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.474023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.474171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.474343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.474538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.474689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.474836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.474867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.475949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.475988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.476118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.476267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.476443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.476616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.476751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.476918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.476951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.477932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.477974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.478092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.478124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.478261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.478293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.478432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.478465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.478609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.478641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.478777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.478810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.478953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.479137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.479293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.479457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.479626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.479762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.479901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.479933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.480788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.480820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.481807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.481839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.481970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.482131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.482281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.482445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.482596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.482765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.482889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.482918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 
00:24:44.523 [2024-07-15 16:17:30.483686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.483926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.483952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.484078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.484105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.523 [2024-07-15 16:17:30.484202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.523 [2024-07-15 16:17:30.484229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.523 qpair failed and we were unable to recover it. 00:24:44.524 [2024-07-15 16:17:30.484308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-07-15 16:17:30.484334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.524 qpair failed and we were unable to recover it. 00:24:44.524 [2024-07-15 16:17:30.484446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-07-15 16:17:30.484473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.524 qpair failed and we were unable to recover it. 00:24:44.524 [2024-07-15 16:17:30.484598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-07-15 16:17:30.484625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.524 qpair failed and we were unable to recover it. 00:24:44.524 [2024-07-15 16:17:30.484718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-07-15 16:17:30.484745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.524 qpair failed and we were unable to recover it. 00:24:44.524 [2024-07-15 16:17:30.484856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.524 [2024-07-15 16:17:30.484882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.524 qpair failed and we were unable to recover it. 
00:24:44.524 [2024-07-15 16:17:30.484982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.524 [2024-07-15 16:17:30.485010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420
00:24:44.524 qpair failed and we were unable to recover it.
00:24:44.810 [2024-07-15 16:17:30.501190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.810 [2024-07-15 16:17:30.501229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:44.810 qpair failed and we were unable to recover it.
00:24:44.811 [2024-07-15 16:17:30.506805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.811 [2024-07-15 16:17:30.506844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420
00:24:44.811 qpair failed and we were unable to recover it.
00:24:44.812 [identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." entries repeat continuously for tqpair=0x7f1254000b90, 0x7f124c000b90 and 0x7f1244000b90 (all with addr=10.0.0.2, port=4420) through 2024-07-15 16:17:30.513048]
00:24:44.812 [2024-07-15 16:17:30.513134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.513961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.513987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.514082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.514108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.514222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.514247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.514340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.514368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 
00:24:44.812 [2024-07-15 16:17:30.514472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.514497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.514585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.812 [2024-07-15 16:17:30.514610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.812 qpair failed and we were unable to recover it. 00:24:44.812 [2024-07-15 16:17:30.514702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.514728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.514848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.514874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.514985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.515107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.515250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.515371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.515553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.515686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 
00:24:44.813 [2024-07-15 16:17:30.515845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.515872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.516880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.516906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 
00:24:44.813 [2024-07-15 16:17:30.517335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.517945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.517982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.518095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.518199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.518348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.518584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.518757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 
00:24:44.813 [2024-07-15 16:17:30.518922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.518947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.519055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.519080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.519191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.519216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.519425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.519452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.519543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.519570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.813 qpair failed and we were unable to recover it. 00:24:44.813 [2024-07-15 16:17:30.519666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.813 [2024-07-15 16:17:30.519698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.519818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.519846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.519968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.520151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.520269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 
00:24:44.814 [2024-07-15 16:17:30.520437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.520620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.520797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.520849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.521783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 
00:24:44.814 [2024-07-15 16:17:30.521939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.521974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.522933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.522981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 
00:24:44.814 [2024-07-15 16:17:30.523417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.523949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.523984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 
00:24:44.814 [2024-07-15 16:17:30.524791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.524927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.524952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.525081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.525107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.525212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.525240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.525334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.525361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.814 [2024-07-15 16:17:30.525462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.814 [2024-07-15 16:17:30.525496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.814 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.525646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.525673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.525762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.525789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.525898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.525939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.526066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 
00:24:44.815 [2024-07-15 16:17:30.526211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.526402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.526595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.526759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.526873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.526899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.527030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.527194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.527360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.527555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.527737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 
00:24:44.815 [2024-07-15 16:17:30.527879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.527905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.528902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.528927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 
00:24:44.815 [2024-07-15 16:17:30.529309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.529900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.529926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 
00:24:44.815 [2024-07-15 16:17:30.530525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.530911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.530998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.531022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.531109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.815 [2024-07-15 16:17:30.531139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.815 qpair failed and we were unable to recover it. 00:24:44.815 [2024-07-15 16:17:30.531257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.531410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.531574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.531712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 
00:24:44.816 [2024-07-15 16:17:30.531822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.531965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.531991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.532914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.532940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 
00:24:44.816 [2024-07-15 16:17:30.533157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.533871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.533897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 
00:24:44.816 [2024-07-15 16:17:30.534504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.534867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.534970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.535147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.535259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.535416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.535652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.535911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.535938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.536077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.536103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 
00:24:44.816 [2024-07-15 16:17:30.536206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.536234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.536363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.536392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.536559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.536593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.536810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.536850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.536986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.537037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.537184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.537212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.816 [2024-07-15 16:17:30.537381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.816 [2024-07-15 16:17:30.537422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.816 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.537579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.537618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.537788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.537838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.537947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.537992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 
00:24:44.817 [2024-07-15 16:17:30.538087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.538315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.538468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.538621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.538790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.538941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.538973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 
00:24:44.817 [2024-07-15 16:17:30.539718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.539861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.539981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.540164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.540316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.540527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.540809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.540924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.540951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 
00:24:44.817 [2024-07-15 16:17:30.541400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.541833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.541952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 
00:24:44.817 [2024-07-15 16:17:30.542823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.542966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.542993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.543106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.543235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.543395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.543539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.817 [2024-07-15 16:17:30.543685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.817 qpair failed and we were unable to recover it. 00:24:44.817 [2024-07-15 16:17:30.543775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.543800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.543887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.543913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 
00:24:44.818 [2024-07-15 16:17:30.544127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.544914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.544940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.545055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.545200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.545394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 
00:24:44.818 [2024-07-15 16:17:30.545560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.545704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.545874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.545900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.546962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.546990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 
00:24:44.818 [2024-07-15 16:17:30.547096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.547816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.547986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.548092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.548213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.548335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 
00:24:44.818 [2024-07-15 16:17:30.548469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.548579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.818 [2024-07-15 16:17:30.548686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.818 [2024-07-15 16:17:30.548713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.818 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.548854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.548881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.549781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 
00:24:44.819 [2024-07-15 16:17:30.549924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.549950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.550105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.550130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.550277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.550313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.550483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.550533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.550649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.550691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.550848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.550886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.551058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.551166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.551325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.551567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 
00:24:44.819 [2024-07-15 16:17:30.551780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.551927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.551974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.552100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.552126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.552211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.552254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.552400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.552433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.552570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.552618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.552755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.552791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.553007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.553152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.553388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 
00:24:44.819 [2024-07-15 16:17:30.553502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.553685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.553863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.553888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.554862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.554903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 
00:24:44.819 [2024-07-15 16:17:30.555029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.555057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.555192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.555220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.555360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.555386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.819 qpair failed and we were unable to recover it. 00:24:44.819 [2024-07-15 16:17:30.555506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.819 [2024-07-15 16:17:30.555533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.555639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.555667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.555773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.555815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.555927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.555962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.556056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.556200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.556315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 
00:24:44.820 [2024-07-15 16:17:30.556450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.556679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.556856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.556888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.557952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.557984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 
00:24:44.820 [2024-07-15 16:17:30.558077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.558888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.558980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.559141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.559256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.559428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 
00:24:44.820 [2024-07-15 16:17:30.559566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.559752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.559930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.559967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.560097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.560136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.560323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.560362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.560599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.560634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.560765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.560791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.560925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.560953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.561121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.561147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 00:24:44.820 [2024-07-15 16:17:30.561283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.820 [2024-07-15 16:17:30.561320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.820 qpair failed and we were unable to recover it. 
00:24:44.826 [2024-07-15 16:17:30.593001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.593044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.593175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.593201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.593385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.593422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.593590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.593632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.593790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.593850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.594001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.594028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.594163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.594189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.594330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.594358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.594527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.594576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.594742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.594779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 
00:24:44.826 [2024-07-15 16:17:30.595016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.595042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.595178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.595204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.595364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.595391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.595540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.595579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.595795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.595832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.596008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.596150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.596324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.596506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.596698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 
00:24:44.826 [2024-07-15 16:17:30.596877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.596903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.597819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.597856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.598024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.598161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.598287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 
00:24:44.826 [2024-07-15 16:17:30.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.598741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.598929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.598960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.599100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.599125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.599231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.599258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.599432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.826 [2024-07-15 16:17:30.599469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.826 qpair failed and we were unable to recover it. 00:24:44.826 [2024-07-15 16:17:30.599660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.599696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.599814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.599855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.599977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.600119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 
00:24:44.827 [2024-07-15 16:17:30.600245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.600439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.600553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.600705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.600923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.600949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.601063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.601089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.601206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.601232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.601378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.601417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.601576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.601613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.601790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.601854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 
00:24:44.827 [2024-07-15 16:17:30.602050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.602116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.602339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.602377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.602600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.602664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.602819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.602856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.603942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.603989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 
00:24:44.827 [2024-07-15 16:17:30.604129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.604165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.604313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.604350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.604471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.604510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.604645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.604682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.604801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.604839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.604993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.605031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.605176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.605212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.827 qpair failed and we were unable to recover it. 00:24:44.827 [2024-07-15 16:17:30.605320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.827 [2024-07-15 16:17:30.605358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.605520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.605559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.605696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.605733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 
00:24:44.828 [2024-07-15 16:17:30.605905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.605941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.606080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.606119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.606272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.606309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.606467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.606503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.606625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.606662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.606822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.606859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.607019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.607056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.607231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.607268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.607427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.607466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.607649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.607685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 
00:24:44.828 [2024-07-15 16:17:30.607872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.607931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.608132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.608169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.608322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.608359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.608501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.608544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.608697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.608734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.608884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.608920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.609083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.609121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.609251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.609289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.609446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.609485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.609651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.609689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 
00:24:44.828 [2024-07-15 16:17:30.609851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.609888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.610943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.610990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.611179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.611216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.611378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.611415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.611600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.611637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 
00:24:44.828 [2024-07-15 16:17:30.611758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.611794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.611927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.828 [2024-07-15 16:17:30.611981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.828 qpair failed and we were unable to recover it. 00:24:44.828 [2024-07-15 16:17:30.612140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.612177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.612362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.612398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.612564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.612601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.612720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.612795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.612987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.613026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.613152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.613192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.613353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.613391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.613553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.613592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 
00:24:44.829 [2024-07-15 16:17:30.613769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.613809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.613975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.614016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.614214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.614251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.614437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.614473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.614615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.614651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.614800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.614837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.615026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.615217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.615409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.615600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 
00:24:44.829 [2024-07-15 16:17:30.615790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.615945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.615991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.616202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.616241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.616351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.616395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.616547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.616586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.616744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.616784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.616936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.616982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.617113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.617152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.617263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.617302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 00:24:44.829 [2024-07-15 16:17:30.617432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.829 [2024-07-15 16:17:30.617471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:44.829 qpair failed and we were unable to recover it. 
00:24:44.829 [2024-07-15 16:17:30.617597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.829 [2024-07-15 16:17:30.617638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420
00:24:44.829 qpair failed and we were unable to recover it.
00:24:44.831 [2024-07-15 16:17:30.630623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.831 [2024-07-15 16:17:30.630683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:44.831 qpair failed and we were unable to recover it.
00:24:44.829 through 00:24:44.835 [the same pair of records, connect() failed (errno = 111) followed by sock connection error, repeats continuously from 2024-07-15 16:17:30.617 to 16:17:30.663, alternating between tqpair=0x7f1244000b90 and tqpair=0x160f200, both targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:24:44.835 [2024-07-15 16:17:30.664056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.664102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.664268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.664312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.664447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.664494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.664681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.664726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.664913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.664967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.665106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.665152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.665340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.665384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.665541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.665587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.665775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.665820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.665995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.666041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 
00:24:44.835 [2024-07-15 16:17:30.666219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.666265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.666447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.666491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.666645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.666692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.666870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.666914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.667108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.667153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.667314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.667358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.667553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.667597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.667810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.667854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.668006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.668052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 00:24:44.835 [2024-07-15 16:17:30.668242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.835 [2024-07-15 16:17:30.668286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.835 qpair failed and we were unable to recover it. 
00:24:44.836 [2024-07-15 16:17:30.668456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.668501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.668682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.668726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.668880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.668924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.669148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.669192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.669345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.669389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.669569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.669620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.669843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.669888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.670058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.670103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.670290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.670334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.670522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.670567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 
00:24:44.836 [2024-07-15 16:17:30.670779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.670823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.670975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.671021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.671210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.671254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.671438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.671482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.671654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.671698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.671905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.671948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.672164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.672209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.672433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.672477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.672654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.672698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.672893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.672937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 
00:24:44.836 [2024-07-15 16:17:30.673108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.673154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.673307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.673353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.673562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.673607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.673804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.673847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.674066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.674113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.674293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.674337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.674549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.674593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.674770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.674815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.674999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.675045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.675221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.675266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 
00:24:44.836 [2024-07-15 16:17:30.675478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.675522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.675711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.675756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.675920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.675973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.676145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.676189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.836 [2024-07-15 16:17:30.676393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.836 [2024-07-15 16:17:30.676437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.836 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.676628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.676673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.676844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.676888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.677080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.677125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.677347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.677391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.677575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.677618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 
00:24:44.837 [2024-07-15 16:17:30.677831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.677875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.678060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.678106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.678290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.678335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.678509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.678554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.678767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.678810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.678992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.679037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.679191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.679238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.679420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.679464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.679674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.679719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.679886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.679930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 
00:24:44.837 [2024-07-15 16:17:30.680080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.680127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.680282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.680327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.680507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.680551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.680702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.680748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.680902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.680947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.681141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.681187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.681327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.681371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.681576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.681621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.681832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.681877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.682071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.682116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 
00:24:44.837 [2024-07-15 16:17:30.682305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.682349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.682539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.682583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.682772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.682816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.682991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.683037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.683182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.683227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.683398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.683442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.683598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.683642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.683840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.683884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.684074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.684120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.684308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.684352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 
00:24:44.837 [2024-07-15 16:17:30.684534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.684580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.684767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.684812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.684989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.685035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.685211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.685262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.685437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.685481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.837 [2024-07-15 16:17:30.685694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.837 [2024-07-15 16:17:30.685738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.837 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.685886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.685930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.686103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.686147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.686319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.686364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.686546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.686590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 
00:24:44.838 [2024-07-15 16:17:30.686809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.686871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.687117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.687169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.687365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.687415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.687605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.687655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.687871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.687934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.688170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.688220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.688435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.688498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.688743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.688805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.689027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.689079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.689258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.689308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 
00:24:44.838 [2024-07-15 16:17:30.689537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.689587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.689786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.690036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.690089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.690281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.690331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.690538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.690588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.690804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.690855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.691053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.691104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.691303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.691354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.691563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.691614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.691826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.691876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 
00:24:44.838 [2024-07-15 16:17:30.692084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.692143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.692320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.692369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.692526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.692576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.692733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.692783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.692985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.693036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.693193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.693244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.693437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.693487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.693696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.693745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.693941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.694002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.694211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.694261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 
00:24:44.838 [2024-07-15 16:17:30.694426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.694475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.694643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.694693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.694929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.694989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.695223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.695274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.695489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.695539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.695706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.695757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.838 [2024-07-15 16:17:30.695976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.838 [2024-07-15 16:17:30.696027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.838 qpair failed and we were unable to recover it. 00:24:44.839 [2024-07-15 16:17:30.696232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.839 [2024-07-15 16:17:30.696282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.839 qpair failed and we were unable to recover it. 00:24:44.839 [2024-07-15 16:17:30.696455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.839 [2024-07-15 16:17:30.696507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.839 qpair failed and we were unable to recover it. 00:24:44.839 [2024-07-15 16:17:30.696718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.839 [2024-07-15 16:17:30.696768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.839 qpair failed and we were unable to recover it. 
[... the same three-record failure repeats continuously from 2024-07-15 16:17:30.697 through 16:17:30.757 (console time 00:24:44.839 to 00:24:44.844): connect() failed, errno = 111 from posix.c:1038:posix_sock_create, the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x160f200 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:24:44.844 [2024-07-15 16:17:30.757675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.844 [2024-07-15 16:17:30.757752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:44.844 qpair failed and we were unable to recover it.
00:24:44.844 [2024-07-15 16:17:30.758059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.758124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.758378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.758441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.758696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.758759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.759012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.759076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.759323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.759390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.759698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.759763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.760043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.760108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.760377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.760440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.760683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.760762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.761030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.761097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 
00:24:44.844 [2024-07-15 16:17:30.761376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.761439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.761696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.761759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.762009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.762073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.762320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.762387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.762605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.762670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.762930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.763019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.763301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.763364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.763626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.763699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.764015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.764083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.764341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.764405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 
00:24:44.844 [2024-07-15 16:17:30.764617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.764680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.764925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.765009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.765236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.765317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.765577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.765642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.765879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.765952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.766237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.766301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.766519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.766582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.766797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.766862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.767167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.767233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.767484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.767547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 
00:24:44.844 [2024-07-15 16:17:30.767794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.767856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.768164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.768230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.768541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.768608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.768855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.768920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.769206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.769270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.769532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.769594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.769811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 16:17:30.769877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.844 qpair failed and we were unable to recover it. 00:24:44.844 [2024-07-15 16:17:30.770153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.770220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.770520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.770583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.770866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.770928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 
00:24:44.845 [2024-07-15 16:17:30.771240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.771322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.771615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.771680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.771931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.772030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.772293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.772357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.772596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.772660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.772981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.773059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.773360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.773424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.773616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.773685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.773939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.774021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.774293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.774359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 
00:24:44.845 [2024-07-15 16:17:30.774611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.774676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.774988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.775054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.775356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.775419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.775705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.775776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.776041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.776108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.776322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.776388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.776647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.776710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.777006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.777071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.777331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.777399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.777643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.777707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 
00:24:44.845 [2024-07-15 16:17:30.778020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.778086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.778371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.778434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.778640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.778709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.778940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.779032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.779341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.779404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.779650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.779722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.780008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.780073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.780364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.780440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.780740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.780805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.781105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.781172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 
00:24:44.845 [2024-07-15 16:17:30.781453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.781516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.781735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.781812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.845 [2024-07-15 16:17:30.782083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.845 [2024-07-15 16:17:30.782150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.845 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.782401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.782463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.782745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.782807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.783047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.783111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.783349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.783425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.783710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.783773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.784058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.784126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.784352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.784415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 
00:24:44.846 [2024-07-15 16:17:30.784686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.784749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.785000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.785066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.785334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.785399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.785670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.785733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.786012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.786079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.786328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.786393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.786674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.786737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.786988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.787053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.787260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.787322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:44.846 qpair failed and we were unable to recover it. 00:24:44.846 [2024-07-15 16:17:30.787654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.846 [2024-07-15 16:17:30.787720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 
00:24:45.118 [2024-07-15 16:17:30.788014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.788083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.788286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.788352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.788644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.788717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.789022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.789088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.789349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.789414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.789671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.789735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.789988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.790052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.790342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.790409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.790675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.790740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.790998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.791063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 
00:24:45.118 [2024-07-15 16:17:30.791336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.791401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.791708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.791786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.792008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.792075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.792282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.792347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.792597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.792660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.792873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.792939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.793265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.793332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.793580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.793642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.793889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.794002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.794282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.794348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 
00:24:45.118 [2024-07-15 16:17:30.794608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.794670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.794974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.118 [2024-07-15 16:17:30.795039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.118 qpair failed and we were unable to recover it. 00:24:45.118 [2024-07-15 16:17:30.795285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.795352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.795640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.795717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.796007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.796072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.796287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.796350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.796646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.796709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.796948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.797027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.797290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.797355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.797614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.797686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 
00:24:45.119 [2024-07-15 16:17:30.798000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.798067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.798360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.798430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.798658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.798724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.798984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.799055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.799349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.799413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.799674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.799755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.800034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.800100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.800352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.800415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.800658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.800721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.800982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.801045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 
00:24:45.119 [2024-07-15 16:17:30.801340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.801405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.801653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.801719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.802002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.802066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.802304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.802366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.802609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.802675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.802984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.803062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.803343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.803406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.803654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.803717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.803918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.804002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.804267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.804336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 
00:24:45.119 [2024-07-15 16:17:30.804599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.804663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.804910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.804990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.805279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.805342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.805563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.805627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.805919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.806014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.806265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.806329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.806545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.806608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.806835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.806898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.807190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.807273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.807526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.807590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 
00:24:45.119 [2024-07-15 16:17:30.807888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.807953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.808245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.808308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.808568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.808631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.808927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.809023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.809245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.809309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.119 [2024-07-15 16:17:30.809554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.119 [2024-07-15 16:17:30.809616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.119 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.809871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.809933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.810182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.810248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.810520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.810588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.810890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.810953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 
00:24:45.120 [2024-07-15 16:17:30.811246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.811310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.811596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.811659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.812002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.812069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.812312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.812375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.812635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.812697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.812938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.813019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.813248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.813320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.813617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.813682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.813926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.814008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.814258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.814320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 
00:24:45.120 [2024-07-15 16:17:30.814542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.814604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.814869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.814937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.815253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.815318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.815584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.815647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.815945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.816031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.816332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.816396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.816693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.816758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.817012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.817078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.817321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.817386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.817644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.817726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 
00:24:45.120 [2024-07-15 16:17:30.817973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.818040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.818286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.818349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.818644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.818708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.818921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.818999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.819245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.819310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.819567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.819632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.819880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.819942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.820233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.820306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.820599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.820676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.820996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.821063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 
00:24:45.120 [2024-07-15 16:17:30.821275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.821340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.821603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.821667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.821913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.821996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.822297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.822363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.822613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.822678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.822983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.823048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.823328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.823652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.823726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.823996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.824063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 00:24:45.120 [2024-07-15 16:17:30.824309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.120 [2024-07-15 16:17:30.824372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.120 qpair failed and we were unable to recover it. 
00:24:45.121 [2024-07-15 16:17:30.824650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.824713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.825012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.825079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.825284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.825349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.825584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.825649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.825946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.826025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.826302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.826365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.826651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.826719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.827002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.827080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.827337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.827401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.827622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.827685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 
00:24:45.121 [2024-07-15 16:17:30.827877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.827939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.828268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.828342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.828600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.828666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.828866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.828929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.829187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.829260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.829542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.829604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.829852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.829917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.830216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.830282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.830529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.830592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.830797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.830859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 
00:24:45.121 [2024-07-15 16:17:30.831118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.831405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.831471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.831750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.831814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.832098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.832181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.832450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.832516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.832782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.832858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.833125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.833192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.833484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.833547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.833832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.833897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.834111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.834176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 
00:24:45.121 [2024-07-15 16:17:30.834460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.834523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.834733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.834798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.835114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.835181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.835386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.835451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.835679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.835743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.836025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.836089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.836341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.836404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.836661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.836726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.837006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.837072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.837288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.837352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 
00:24:45.121 [2024-07-15 16:17:30.837591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.837654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.837897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.838000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.838268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.121 [2024-07-15 16:17:30.838334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.121 qpair failed and we were unable to recover it. 00:24:45.121 [2024-07-15 16:17:30.838630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.838692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.838992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.839057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.839281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.839348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.839639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.839704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.839969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.840034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.840291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.840355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.840636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.840698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 
00:24:45.122 [2024-07-15 16:17:30.840940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.841056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.841313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.841377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.841629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.841694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.841990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.842055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.842297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.842363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.842639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.842705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.842950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.843028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.843237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.843300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.843580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.843643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.843925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.844003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 
00:24:45.122 [2024-07-15 16:17:30.844254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.844320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.844576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.844638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.844887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.844950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.845227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.845290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.845532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.845595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.845847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.845910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.846176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.846240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.846521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.846584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.846785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.846847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.847104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.847170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 
00:24:45.122 [2024-07-15 16:17:30.847389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.847452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.847702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.847764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.847999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.848065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.848346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.848410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.848695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.848758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.849041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.849105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.849399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.849463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.849748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.849811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.850093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.850158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 00:24:45.122 [2024-07-15 16:17:30.850442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.122 [2024-07-15 16:17:30.850505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.122 qpair failed and we were unable to recover it. 
00:24:45.123 [2024-07-15 16:17:30.850766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.850829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.851030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.851095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.851316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.851387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.851600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.851663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.851928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.852011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.852250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.852314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.852494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.852557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.852814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.852877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.853134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.853198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.853438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.853503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 
00:24:45.123 [2024-07-15 16:17:30.853794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.853857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.854169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.854234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.854477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.854541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.854752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.854814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.855082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.855148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.855402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.855465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.855761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.855823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.856119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.856183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.856484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.856547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.856834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.856896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 
00:24:45.123 [2024-07-15 16:17:30.857164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.857228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.857510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.857572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.857816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.857879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.858184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.858248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.858494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.858558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.858811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.858874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.859140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.859205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.859444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.859507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.859752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.859818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 00:24:45.123 [2024-07-15 16:17:30.860077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.123 [2024-07-15 16:17:30.860157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.123 qpair failed and we were unable to recover it. 
00:24:45.128 [2024-07-15 16:17:30.906386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.906440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.906652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.906705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.906909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.906974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.907174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.907207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.907440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.907511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.907694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.907748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.908005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.908039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.908190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.908223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.908452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.908485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.908624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.908657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 
00:24:45.128 [2024-07-15 16:17:30.908846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.908899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.909077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.909112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.909234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.909290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.909493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.909565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.909786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.909840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.910040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.910073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.910215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.910253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.910492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.910562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.128 qpair failed and we were unable to recover it. 00:24:45.128 [2024-07-15 16:17:30.910745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.128 [2024-07-15 16:17:30.910798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.911017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 
00:24:45.129 [2024-07-15 16:17:30.911165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.911305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.911480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.911773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.911898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.911930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.912062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.912096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.912248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.912281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.912387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.912419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.912627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.912659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.912828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.912893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 
00:24:45.129 [2024-07-15 16:17:30.913108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.913142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.913331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.913408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.913612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.913666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.913849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.913903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.914075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.914108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.914211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.914245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.914449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.914522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.914738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.914794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.915022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.915057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.915196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.915228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 
00:24:45.129 [2024-07-15 16:17:30.915449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.915481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.915613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.915645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.915859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.915912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.916093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.916131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.916253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.916286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.916453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.916485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.916706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.916739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.916880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.916913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.917101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.917134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.917323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.917356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 
00:24:45.129 [2024-07-15 16:17:30.917473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.917505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.917678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.917732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.918014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.918047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.918193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.918225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.918427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.918459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.918596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.918628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.918855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.918908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.919123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.919157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.919369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.919441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.919689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.919759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 
00:24:45.129 [2024-07-15 16:17:30.920031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.920066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.920168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.920202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.920419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.129 [2024-07-15 16:17:30.920489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.129 qpair failed and we were unable to recover it. 00:24:45.129 [2024-07-15 16:17:30.920743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.920797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.921029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.921063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.921192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.921225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.921339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.921372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.921538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.921591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.921781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.921837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.922057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.922091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 
00:24:45.130 [2024-07-15 16:17:30.922255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.922288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.922561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.922594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.922717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.922750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.922922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.922996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.923131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.923164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.923309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.923342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.923478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.923511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.923711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.923766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.924030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.924065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.924175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.924208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 
00:24:45.130 [2024-07-15 16:17:30.924420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.924474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.924698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.924752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.924933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.925011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.925126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.925160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.925337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.925391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.925582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.925653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.925862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.925916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.926119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.926152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.926329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.926383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.926605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.926659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 
00:24:45.130 [2024-07-15 16:17:30.926907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.926972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.927112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.927145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.927247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.927280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.927422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.927454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.927666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.927743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.928005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.928039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.928186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.928220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.928445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.928478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.928630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.928663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.928856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.928910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 
00:24:45.130 [2024-07-15 16:17:30.929086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.929120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.929234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.929299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.929539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.929611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.929820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.929874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.930075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.930109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.930226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.930288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.930461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.930514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.130 qpair failed and we were unable to recover it. 00:24:45.130 [2024-07-15 16:17:30.930711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.130 [2024-07-15 16:17:30.930764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.931002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.931035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.931145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.931179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 
00:24:45.131 [2024-07-15 16:17:30.931374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.931447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.931660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.931721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.931943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.932017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.932183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.932216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.932457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.932527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.932714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.932769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.933015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.933049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.933181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.933215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.933428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.933499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.933713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.933768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 
00:24:45.131 [2024-07-15 16:17:30.934018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.934091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.934347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.934380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.934523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.934555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.934801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.934855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.935111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.935145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.935287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.935321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.935538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.935609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.935798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.935851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.936094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.936127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.936241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.936273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 
00:24:45.131 [2024-07-15 16:17:30.936477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.936551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.936784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.936838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.937052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.937085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.937230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.937262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.937484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.937556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.937769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.937823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.938070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.938125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.938324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.938397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.938645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.938707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 00:24:45.131 [2024-07-15 16:17:30.938933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.131 [2024-07-15 16:17:30.938999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.131 qpair failed and we were unable to recover it. 
00:24:45.136 [2024-07-15 16:17:30.989390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.989423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.989554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.989586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.989765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.989819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.990068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.990142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.990393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.990450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.990671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.990733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.990934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.991007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.991284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.991357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.991596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.991678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.991948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.992019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 
00:24:45.136 [2024-07-15 16:17:30.992274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.992345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.992526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.992599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.992799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.992853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.993131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.993206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.993421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.993504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.993705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.993760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.994000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.994075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.994290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.994325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.994440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.994473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.994593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.994625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 
00:24:45.136 [2024-07-15 16:17:30.994840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.994882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.995031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.995093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.995328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.995399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.995631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.995702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.995927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.995993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.996269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.996348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.996616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.996689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.996942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.997025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.997292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.997325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.997497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.997530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 
00:24:45.136 [2024-07-15 16:17:30.997768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.997854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.998110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.998145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.998251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.998284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.136 [2024-07-15 16:17:30.998428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.136 [2024-07-15 16:17:30.998460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.136 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:30.998658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:30.998736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:30.998922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:30.998993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:30.999200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:30.999271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:30.999525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:30.999599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:30.999774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:30.999829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.000110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.000183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 
00:24:45.137 [2024-07-15 16:17:31.000440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.000512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.000723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.000776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.000988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.001054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.001306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.001379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.001631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.001702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.001948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.002016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.002211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.002284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.002579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.002654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.002926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.002993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.003284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.003357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 
00:24:45.137 [2024-07-15 16:17:31.003656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.003728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.003995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.004064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.004378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.004450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.004664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.004735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.004984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.005039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.005283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.005354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.005611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.005685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.005922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.006006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.006261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.006316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.006592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.006625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 
00:24:45.137 [2024-07-15 16:17:31.006757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.006800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.007003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.007060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.007272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.007344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.007584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.007657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.007839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.007893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.008183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.008259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.008515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.008591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.008823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.008880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.009151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.009225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.009474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.009546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 
00:24:45.137 [2024-07-15 16:17:31.009776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.009833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.010134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.010168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.010298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.010330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.010491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.010523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.010696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.010750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.010981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.011037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.011278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.011360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.011658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.011734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.137 qpair failed and we were unable to recover it. 00:24:45.137 [2024-07-15 16:17:31.011976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.137 [2024-07-15 16:17:31.012032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.012309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.012381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 
00:24:45.138 [2024-07-15 16:17:31.012684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.012729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.012876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.012911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.013128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.013201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.013367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.013421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.013640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.013712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.013952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.014033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.014296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.014370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.014623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.014679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.014861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.014894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.015010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.015043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 
00:24:45.138 [2024-07-15 16:17:31.015229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.015301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.015551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.015622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.015852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.015906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.016166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.016247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.016462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.016534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.016784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.016837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.017017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.017074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.017323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.017358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.017461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.017494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.017669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.017725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 
00:24:45.138 [2024-07-15 16:17:31.017885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.017939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.018211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.018284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.018495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.018567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.018803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.018837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.018965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.019000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.019139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.019172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.019376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.019457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.019721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.019776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.020022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.020100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.020344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.020418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 
00:24:45.138 [2024-07-15 16:17:31.020637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.020671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.020838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.020871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.020986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.021020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.021286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.021359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.021648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.021734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.021990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.022048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.022283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.022641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.022712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.022936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.023022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.023280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.023355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 
00:24:45.138 [2024-07-15 16:17:31.023592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.023674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.023900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.023973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.024221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.138 [2024-07-15 16:17:31.024297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.138 qpair failed and we were unable to recover it. 00:24:45.138 [2024-07-15 16:17:31.024524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.024580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.024832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.024889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.025166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.025240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.025452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.025524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.025798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.025876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.026166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.026241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.026441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.026513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 
00:24:45.139 [2024-07-15 16:17:31.026811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.026884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.027225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.027302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.027579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.027652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.027901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.027974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.028254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.028287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.028469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.028535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.028838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.028922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.029198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.029271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.029550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.029622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 00:24:45.139 [2024-07-15 16:17:31.029797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.139 [2024-07-15 16:17:31.029853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.139 qpair failed and we were unable to recover it. 
00:24:45.139 [2024-07-15 16:17:31.030121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.139 [2024-07-15 16:17:31.030197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.139 qpair failed and we were unable to recover it.
00:24:45.144 [... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-15 16:17:31.030466 through 16:17:31.089287 ...]
00:24:45.144 [2024-07-15 16:17:31.089516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.089588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.089811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.089866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.090115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.090193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.090475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.090548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.090766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.090823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.091062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.091135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.091417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.091489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.091727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.091792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.092007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.092064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.092312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.092386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 
00:24:45.144 [2024-07-15 16:17:31.092613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.092938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.093007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.093279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.093355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.093640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.093712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.093892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.093932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.094107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.094140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.094274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.094305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.094477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.094510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.094718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.094751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.094895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.094926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 
00:24:45.144 [2024-07-15 16:17:31.095088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.095122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.095328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.095382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.095619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.095651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.095765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.095797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.144 qpair failed and we were unable to recover it. 00:24:45.144 [2024-07-15 16:17:31.096034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.144 [2024-07-15 16:17:31.096091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.096318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.096376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.096620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.096676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.096880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.096933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.097175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.097230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.097410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.097464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 
00:24:45.145 [2024-07-15 16:17:31.097648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.097714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.097928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.097997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.098252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.098324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.098603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.098675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.098889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.098943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.099225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.099299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.099539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.099615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.099838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.099893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.100124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.100179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.100469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.100541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 
00:24:45.145 [2024-07-15 16:17:31.100786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.100843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.101100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.101175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.101471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.101543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.101757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.101812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.102008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.102065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.102344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.102424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.102671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.102743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.102990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.103046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.103253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.103326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.103608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.103681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 
00:24:45.145 [2024-07-15 16:17:31.103908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.103977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.104229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.104261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.104423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.104454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.104616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.104691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.104943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.105015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.105274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.105361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.105620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.105693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.105946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.106014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.106196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.106251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.106482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.106555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 
00:24:45.145 [2024-07-15 16:17:31.106771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.106855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.107125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.107199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.107494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.107567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.107806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.145 [2024-07-15 16:17:31.107860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.145 qpair failed and we were unable to recover it. 00:24:45.145 [2024-07-15 16:17:31.108112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.108200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.108436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.108496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.110052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.110273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.110490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.110667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.110797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.110940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.110974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.111156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.111217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.111386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.111438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.111602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.111629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.111752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.111778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.111867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.111893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.112445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.112946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.112990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.113837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.113865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.113973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.114878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.114904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.115165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.115941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.115978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.116481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.116861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.116986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.117814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.117934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.117968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.118868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.118895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 00:24:45.423 [2024-07-15 16:17:31.119016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.423 [2024-07-15 16:17:31.119047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.423 qpair failed and we were unable to recover it. 
00:24:45.423 [2024-07-15 16:17:31.119188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.423 [2024-07-15 16:17:31.119214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.423 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error sequence for tqpair=0x160f200 with addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 16:17:31.134878, each attempt ending "qpair failed and we were unable to recover it." ...]
00:24:45.425 [2024-07-15 16:17:31.135024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.425 [2024-07-15 16:17:31.135065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:45.425 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420, again ending "qpair failed and we were unable to recover it." after every attempt ...]
00:24:45.426 [2024-07-15 16:17:31.154225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.426 [2024-07-15 16:17:31.154273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:45.426 qpair failed and we were unable to recover it.
00:24:45.426 [2024-07-15 16:17:31.154475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.154522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.154748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.154795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.155003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.155053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.155241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.155288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.155483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.155531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.155721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.155770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.155991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.156040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.156237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.156286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.156485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.156535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.156714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.156764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 
00:24:45.426 [2024-07-15 16:17:31.156918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.156975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.157204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.157251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.157446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.157497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.157680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.157727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.157934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.157993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.158197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.158244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.158465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.158512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.158741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.158788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.158976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.159026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.159253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.159285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 
00:24:45.426 [2024-07-15 16:17:31.159424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.159456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.159654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.159686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.159846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.159878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.160085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.160137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.160348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.160379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.160531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.426 [2024-07-15 16:17:31.160563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.426 qpair failed and we were unable to recover it. 00:24:45.426 [2024-07-15 16:17:31.160749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.160800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.161016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.161068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.161267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.161318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.161530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.161581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.161744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.161796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.162006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.162058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.162266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.162317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.162486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.162538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.162729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.162782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.162950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.163019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.163218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.163269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.163504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.163555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.163731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.163791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.164021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.164054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.164216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.164248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.164440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.164491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.164725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.164776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.164988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.165040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.165272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.165324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.165519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.165570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.165730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.165779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.165951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.166013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.166223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.166273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.166502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.166554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.166786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.166837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.167037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.167088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.167285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.167336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.167540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.167593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.167808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.167859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.168074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.168126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.168323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.168375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.168574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.168627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.168829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.168880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.169056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.169109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.169328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.169379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.169569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.169620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.169769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.169819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.169999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.170060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.170303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.170355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.170584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.170635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.170891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.170985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.171197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.171248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.171420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.171472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.171673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.171725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.171916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.171976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.172220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.172270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.172479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.172530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.172702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.172754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.172999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.173052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.173257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.173309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.173474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.173525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.173695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.173745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.173944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.174014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.174260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.174310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.174490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.174541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.174741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.174792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.174965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.175016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.175252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.175302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.175495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.175546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.175748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.175798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.176000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.176052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.176254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.176304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.176508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.176558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.176763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.176795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 
00:24:45.427 [2024-07-15 16:17:31.177085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.177139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.177386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.177440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.177672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.177726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.177935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.178004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.178253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.178308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.178518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 16:17:31.178572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.427 qpair failed and we were unable to recover it. 00:24:45.427 [2024-07-15 16:17:31.178793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.178846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.179068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.179123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.179335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.179390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.179604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.179658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.179833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.179891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.180071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.180128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.180315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.180370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.180632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.180686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.180940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.181033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.181314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.181346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.181480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.181511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.181699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.181753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.181969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.182025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.182242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.182274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.182416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.182448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.182619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.182673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.182916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.182982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.183254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.183308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.183494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.183550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.183772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.183826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.184037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.184071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.184181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.184217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.184433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.184487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.184745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.184799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.185030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.185095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.185286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.185341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.185511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.185565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.185811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.185865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.186091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.186146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.186336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.186390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.186641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.186695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.186879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.186927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.187167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.187222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.187393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.187449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.187666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.187722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.187906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.187977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.188235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.188291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.188540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.188594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.188834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.188898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.189154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.189211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.189431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.189488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.189665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.189723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.189946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.190026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.190253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.190308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.190529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.190582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.190835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.190890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.191132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.191189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.191438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.191492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.191691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.191753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.192011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.192082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.192334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.192393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.192584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.192643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.192850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.192904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.193157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.193214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.193484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.193539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.193798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.193852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.194177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.194238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.194445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.194504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.194766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.194824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.195061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.195123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.195388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.195448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.195712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.195770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.195981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.196048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.196294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.196355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.196577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.196636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.196841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.196900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.197168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.197227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.197485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.197544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.197790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.197849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.198097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.198155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.198425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.198483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.198708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.198765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.199000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.199058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.199261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.199323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 
00:24:45.428 [2024-07-15 16:17:31.199554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.199614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.199880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.199937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.200197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.200261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.428 [2024-07-15 16:17:31.200503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 [2024-07-15 16:17:31.200562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.428 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.200786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.200847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.201123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.201185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.201417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.201478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.201715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.201777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.202014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.202076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.202334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.202394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 [2024-07-15 16:17:31.202623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.202683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.202865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.202925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.203185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.203244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.203469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.203528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.203759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.203819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.204069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.204399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.204458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.204674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.204732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.204929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.205001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.205235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.205296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 [2024-07-15 16:17:31.205559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.205618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.205867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.205932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.206200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.206259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.206434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.206492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.206713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.206772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.206980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.207041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.207316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.207375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.207612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.207673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.207913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.207982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.208228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.208289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 [2024-07-15 16:17:31.208522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.208582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.208860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.208919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.209178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.209238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.209461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.209520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.209809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.209873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.210188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.210254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.210537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.210601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.210859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.210924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.211214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.211278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.211493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.211560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 [2024-07-15 16:17:31.211784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.211850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.212111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.212177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.212473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.212538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.212763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.212828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 888418 Killed "${NVMF_APP[@]}" "$@" 00:24:45.429 [2024-07-15 16:17:31.213051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.213117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.213372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.213437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:45.429 [2024-07-15 16:17:31.213730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.213793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:45.429 [2024-07-15 16:17:31.214045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.214110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
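The errno = 111 in the entries above is ECONNREFUSED on Linux: target_disconnect.sh has just killed the first nvmf_tgt instance (the "Killed" message from line 36 of the script), so nothing is listening on 10.0.0.2:4420 and every connect() issued by the initiator's posix_sock_create() is refused until the test restarts the target via disconnect_init. Purely as an illustration, and not part of the SPDK test suite, a minimal probe of the same condition could retry the listener with bash's built-in /dev/tcp redirection:

# hypothetical helper, not from target_disconnect.sh: keep retrying the
# NVMe/TCP listener; while the target process is down every attempt fails
# the same way the initiator logs above (connect() -> errno 111, ECONNREFUSED)
for i in $(seq 1 30); do
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "port 4420 accepting again after $i attempts"   # fd closes with the subshell
        exit 0
    fi
    sleep 1                                                  # back off before retrying
done
echo "port 4420 still refusing connections" >&2
exit 1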
00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.429 [2024-07-15 16:17:31.214357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.214424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.429 [2024-07-15 16:17:31.214709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.214774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.429 [2024-07-15 16:17:31.215019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.215084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.215330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.215394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.215639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.215706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.216027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.216093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.216296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.216359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.216608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.216672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.216915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.217012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 [2024-07-15 16:17:31.217263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.217329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.217578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.217644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.217930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.218013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.218260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.218326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.218581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.218646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.218857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=888982 00:24:45.429 [2024-07-15 16:17:31.218922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 888982 00:24:45.429 [2024-07-15 16:17:31.219240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 888982 ']' 00:24:45.429 [2024-07-15 16:17:31.219309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 
00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.429 [2024-07-15 16:17:31.219581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.219650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.429 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.429 [2024-07-15 16:17:31.219967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.220035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.220279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.220350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.220608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.220675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.220925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.429 [2024-07-15 16:17:31.220951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.429 qpair failed and we were unable to recover it. 00:24:45.429 [2024-07-15 16:17:31.221066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.221091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.221206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.221281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
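While the still-running initiator keeps streaming connection errors, the test has relaunched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 888982, started with -i 0 -e 0xFFFF -m 0xF0) and is now in waitforlisten, i.e. blocking until the new target process exposes its RPC socket at /var/tmp/spdk.sock before any further configuration is sent to it. The real waitforlisten lives in the autotest common helpers; the sketch below is only a hedged approximation of that wait, under the assumption that the presence of /var/tmp/spdk.sock is enough to proceed:

# hedged approximation, not the actual waitforlisten helper: poll until the
# freshly started nvmf_tgt pid is still alive and its RPC socket appears,
# so RPC calls made afterwards have something to talk to
pid=$1                                # e.g. 888982 in the log above
rpc_sock=${2:-/var/tmp/spdk.sock}
for i in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt $pid exited early" >&2; exit 1; }
    [ -S "$rpc_sock" ] && exit 0      # socket is there, the target is listening
    sleep 0.1
done
echo "timed out waiting for $rpc_sock" >&2
exit 1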
00:24:45.430 [2024-07-15 16:17:31.221461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.221487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.221626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.221651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.221853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.221920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.222967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.222993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.223084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.223944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.223983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.224342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.224903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.224930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.225533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.225924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.225949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.226802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.226944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.226977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.227948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.227980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.228092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.228845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.228969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.229384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.229900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.229926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 
00:24:45.430 [2024-07-15 16:17:31.230647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.230904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.230999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.231026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.231122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.231148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.231237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.430 [2024-07-15 16:17:31.231262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.430 qpair failed and we were unable to recover it. 00:24:45.430 [2024-07-15 16:17:31.231342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.231368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.231477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.231503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.231628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.231654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.231729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.231754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.231853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.231892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.231994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.232906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.232931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.233156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.233854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.233977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.234452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.234973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.235697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.235936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.235976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.236885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.236910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.237014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.237846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.237871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.238308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.238917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.238943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.239571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.239961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.239987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 
00:24:45.431 [2024-07-15 16:17:31.240760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.240893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.240918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.241036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.431 [2024-07-15 16:17:31.241062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.431 qpair failed and we were unable to recover it. 00:24:45.431 [2024-07-15 16:17:31.241155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.241916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.241941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.242066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.242970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.242997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.243368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.243960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.243987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.244634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.244895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.244922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.245868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.245895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.246014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.246948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.246979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.247325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.247950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.247982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.248650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.248839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.248981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.249882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.249907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 
00:24:45.432 [2024-07-15 16:17:31.250036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.250063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.250150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.250176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.250287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.250312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.250423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.432 [2024-07-15 16:17:31.250449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.432 qpair failed and we were unable to recover it. 00:24:45.432 [2024-07-15 16:17:31.250528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.250553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.250691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.250717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.250797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.250823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.250912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.250939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.251282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.251953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.251984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.252627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.252876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.252965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.253839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.253864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.253979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.254973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.254999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.255225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.255970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.255996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.256501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.256866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.256990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.257743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.257895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.257920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.258901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.258927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.259019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.259900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.259925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.260055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.260083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.260177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.260204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 
00:24:45.433 [2024-07-15 16:17:31.260315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.260340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.260429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.433 [2024-07-15 16:17:31.260455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.433 qpair failed and we were unable to recover it. 00:24:45.433 [2024-07-15 16:17:31.260564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.260590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.260675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.260701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.260788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.260814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.260907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.260932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.261579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.261944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.261978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.262796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.262940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.262983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.263867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.263975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.264087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.264879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.264904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.265389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.265862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.265887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.266649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.266909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.266936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.267706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.267731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 
00:24:45.434 [2024-07-15 16:17:31.267839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.267864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.267972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268195] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization...
00:24:45.434 [2024-07-15 16:17:31.268241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268268] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:45.434 [2024-07-15 16:17:31.268269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.268929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.434 [2024-07-15 16:17:31.268962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:45.434 qpair failed and we were unable to recover it.
00:24:45.434 [2024-07-15 16:17:31.269082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.269107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.269218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.269245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.269330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.269355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.269446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.434 [2024-07-15 16:17:31.269471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.434 qpair failed and we were unable to recover it. 00:24:45.434 [2024-07-15 16:17:31.269607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.269634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.269749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.269774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.269893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.269920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 
00:24:45.435 [2024-07-15 16:17:31.270366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.270839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.270878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.271014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.271041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.271120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.271145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.271289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.271315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.271406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.271432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 00:24:45.435 [2024-07-15 16:17:31.271519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.435 [2024-07-15 16:17:31.271544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.435 qpair failed and we were unable to recover it. 
00:24:45.435 [2024-07-15 16:17:31.271657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.435 [2024-07-15 16:17:31.271683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420
00:24:45.435 qpair failed and we were unable to recover it.
00:24:45.435 [... the same three-record sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 16:17:31.271657 through 16:17:31.298553, all targeting addr=10.0.0.2, port=4420, for tqpair handles 0x7f1254000b90, 0x7f124c000b90, 0x7f1244000b90, and 0x160f200 ...]
00:24:45.439 [2024-07-15 16:17:31.298528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:45.439 [2024-07-15 16:17:31.298553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420
00:24:45.439 qpair failed and we were unable to recover it.
00:24:45.439 [2024-07-15 16:17:31.298644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.298670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.298779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.298805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.298917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.298942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.299776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 
00:24:45.439 [2024-07-15 16:17:31.299889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.299914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.300860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.300904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 
00:24:45.439 [2024-07-15 16:17:31.301170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.301936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.301968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.302050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.439 [2024-07-15 16:17:31.302075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.439 qpair failed and we were unable to recover it. 00:24:45.439 [2024-07-15 16:17:31.302158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 
00:24:45.440 [2024-07-15 16:17:31.302425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.302899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.302989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 
00:24:45.440 [2024-07-15 16:17:31.303622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.303845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.303869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 
00:24:45.440 [2024-07-15 16:17:31.304846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.304971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.304997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.305088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.305114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.305208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.305234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.305325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.305350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.305437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.440 [2024-07-15 16:17:31.305462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.440 qpair failed and we were unable to recover it. 00:24:45.440 [2024-07-15 16:17:31.305545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.305569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.305686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.305713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.305801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.305830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.305945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.305976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 
00:24:45.441 [2024-07-15 16:17:31.306086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.306960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.306986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.307074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.307099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.307193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.307218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 
00:24:45.441 [2024-07-15 16:17:31.307330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.307355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.307447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.307473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 [2024-07-15 16:17:31.307563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 16:17:31.307590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.441 qpair failed and we were unable to recover it. 00:24:45.441 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.442 [2024-07-15 16:17:31.307697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.307723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.307823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.307862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.307985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 
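(Aside, not part of the log: the interleaved "EAL: No free 2048 kB hugepages reported on node 1" line above is DPDK's EAL noting that no free 2 MiB hugepages were available on NUMA node 1. A minimal, purely illustrative sketch of how per-node 2 MiB hugepages are commonly reserved through sysfs is below; the node, page count, and the idea that this is the fix for this particular job are assumptions for the example, not taken from this run's configuration.)

```c
/* Illustrative only: reserve 2 MiB hugepages on NUMA node 1 via sysfs.
 * The count (1024 pages = 2 GiB) is an assumed example value. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages";
    FILE *f = fopen(path, "w");   /* requires root privileges */
    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "1024\n");         /* request 1024 x 2 MiB pages on node 1 */
    fclose(f);
    return 0;
}
```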
00:24:45.442 [2024-07-15 16:17:31.308616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.308899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.308996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 
00:24:45.442 [2024-07-15 16:17:31.309781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.309918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.309943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.310885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.310914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.442 [2024-07-15 16:17:31.311005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.311030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 
00:24:45.442 [2024-07-15 16:17:31.311120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 16:17:31.311145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.442 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.311908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.311999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 
00:24:45.443 [2024-07-15 16:17:31.312332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.312895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.312920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 
00:24:45.443 [2024-07-15 16:17:31.313549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.313893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.313918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.314020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.314045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.314137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.314162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.443 qpair failed and we were unable to recover it. 00:24:45.443 [2024-07-15 16:17:31.314251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 16:17:31.314277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.314389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.314414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.314521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.314546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.314698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.314737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 
00:24:45.444 [2024-07-15 16:17:31.314857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.314885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.315920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.315952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 
00:24:45.444 [2024-07-15 16:17:31.316188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.316946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.316985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 
00:24:45.444 [2024-07-15 16:17:31.317451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.317869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.317980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.318006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.318091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.318116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.444 [2024-07-15 16:17:31.318226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.444 [2024-07-15 16:17:31.318252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.444 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.318348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.318449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.318590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 
00:24:45.445 [2024-07-15 16:17:31.318732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.318844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.318963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.318990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.319810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 
00:24:45.445 [2024-07-15 16:17:31.319946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.319980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.320939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.320981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.321066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 
00:24:45.445 [2024-07-15 16:17:31.321182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.321291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.321425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.321561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.445 qpair failed and we were unable to recover it. 00:24:45.445 [2024-07-15 16:17:31.321696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.445 [2024-07-15 16:17:31.321721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.321798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.321823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.321914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.321940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 
00:24:45.446 [2024-07-15 16:17:31.322390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.322918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.322944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 
00:24:45.446 [2024-07-15 16:17:31.323707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.323948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.323980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.324841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.324868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 
00:24:45.446 [2024-07-15 16:17:31.324987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.325012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.325102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.325127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.446 qpair failed and we were unable to recover it. 00:24:45.446 [2024-07-15 16:17:31.325235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.446 [2024-07-15 16:17:31.325260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.325372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.325505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.325642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.325741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.325877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.325981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 
00:24:45.447 [2024-07-15 16:17:31.326259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.326899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.326999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.327025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.327142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.327168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.327286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.327311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 00:24:45.447 [2024-07-15 16:17:31.327429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.447 [2024-07-15 16:17:31.327454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.447 qpair failed and we were unable to recover it. 
00:24:45.448 [2024-07-15 16:17:31.327543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.327569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.327666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.327704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.327832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.327859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.327942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.327974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 
00:24:45.448 [2024-07-15 16:17:31.328854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.328879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.328996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.329926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.329953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 
00:24:45.448 [2024-07-15 16:17:31.330190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.330846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.330952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.448 [2024-07-15 16:17:31.331001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.448 qpair failed and we were unable to recover it. 00:24:45.448 [2024-07-15 16:17:31.331094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.331208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.331347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 
00:24:45.449 [2024-07-15 16:17:31.331492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.331622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.331758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.331910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.331938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 
00:24:45.449 [2024-07-15 16:17:31.332786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.332932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.332972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.333880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.333988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 
00:24:45.449 [2024-07-15 16:17:31.334135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.334237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.334351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.334463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.449 [2024-07-15 16:17:31.334606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.449 [2024-07-15 16:17:31.334631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.449 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.334744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.334774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.334856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.334881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.334996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 
00:24:45.450 [2024-07-15 16:17:31.335377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.335903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.335932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 
00:24:45.450 [2024-07-15 16:17:31.336662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.336915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.336943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 
00:24:45.450 [2024-07-15 16:17:31.337867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.337893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.337990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.338136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.338274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.338411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.338556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.450 qpair failed and we were unable to recover it. 00:24:45.450 [2024-07-15 16:17:31.338694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.450 [2024-07-15 16:17:31.338720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.338796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.338822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.338916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.338944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 
00:24:45.451 [2024-07-15 16:17:31.339187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.339903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.339930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 
00:24:45.451 [2024-07-15 16:17:31.340365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.340933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.340964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.451 [2024-07-15 16:17:31.341068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 
00:24:45.451 [2024-07-15 16:17:31.341432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.341881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.451 [2024-07-15 16:17:31.341907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.451 qpair failed and we were unable to recover it. 00:24:45.451 [2024-07-15 16:17:31.342003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 
00:24:45.452 [2024-07-15 16:17:31.342614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.342897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.342922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 
00:24:45.452 [2024-07-15 16:17:31.343879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.343905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.343995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.344874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.344899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 
00:24:45.452 [2024-07-15 16:17:31.345140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.345877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.345903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.346025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.452 [2024-07-15 16:17:31.346052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.452 qpair failed and we were unable to recover it. 00:24:45.452 [2024-07-15 16:17:31.346135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.346305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 
00:24:45.453 [2024-07-15 16:17:31.346423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.346541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.346692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.346810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.346966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.346994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 
00:24:45.453 [2024-07-15 16:17:31.347760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.347904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.347929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.348877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.348902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 
00:24:45.453 [2024-07-15 16:17:31.348989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.453 [2024-07-15 16:17:31.349593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.453 qpair failed and we were unable to recover it. 00:24:45.453 [2024-07-15 16:17:31.349707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.349733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.349838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.349876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.349965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.349992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 
00:24:45.454 [2024-07-15 16:17:31.350196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.350892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.350998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 
00:24:45.454 [2024-07-15 16:17:31.351543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.351936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.351980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.352745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 
00:24:45.454 [2024-07-15 16:17:31.352867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.352906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.353887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.353913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 00:24:45.454 [2024-07-15 16:17:31.354036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.454 [2024-07-15 16:17:31.354061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.454 qpair failed and we were unable to recover it. 
00:24:45.454 [2024-07-15 16:17:31.354145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.354917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.354944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 
00:24:45.455 [2024-07-15 16:17:31.355374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.355854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.355880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 
00:24:45.455 [2024-07-15 16:17:31.356647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.356903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.356928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.357773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 
00:24:45.455 [2024-07-15 16:17:31.357891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.357917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.358027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.358052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.358137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.358162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.358252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.358278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.455 [2024-07-15 16:17:31.358395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.455 [2024-07-15 16:17:31.358421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.455 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.358505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.358530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.358613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.358638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.358759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.358784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.358905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.358934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 
00:24:45.456 [2024-07-15 16:17:31.359217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.359931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.359962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 
00:24:45.456 [2024-07-15 16:17:31.360390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.360881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.360906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 
00:24:45.456 [2024-07-15 16:17:31.361670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.361938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.361972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.362088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.362113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.456 [2024-07-15 16:17:31.362232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.456 [2024-07-15 16:17:31.362258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.456 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.362341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.362368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.362450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.362476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.362562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.362590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.362680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.362705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.362843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.362870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 
00:24:45.457 [2024-07-15 16:17:31.362982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.363929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.363967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 
00:24:45.457 [2024-07-15 16:17:31.364322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.364932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.364986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 
00:24:45.457 [2024-07-15 16:17:31.365618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.365939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.365986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.366110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.366137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.366248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.366275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.366367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.457 [2024-07-15 16:17:31.366394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.457 qpair failed and we were unable to recover it. 00:24:45.457 [2024-07-15 16:17:31.366537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.366563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.366654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.366681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.366771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.366798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.366891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.366917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 
00:24:45.458 [2024-07-15 16:17:31.367031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.367888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.367977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 
00:24:45.458 [2024-07-15 16:17:31.368227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.368867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.368992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 
00:24:45.458 [2024-07-15 16:17:31.369494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.369944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.369979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.370065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.370090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.370187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.458 [2024-07-15 16:17:31.370214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.458 qpair failed and we were unable to recover it. 00:24:45.458 [2024-07-15 16:17:31.370305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.370416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.370555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.370674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 
00:24:45.459 [2024-07-15 16:17:31.370811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.370927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.370971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.371904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.371930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 
00:24:45.459 [2024-07-15 16:17:31.372195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.459 [2024-07-15 16:17:31.372789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.459 [2024-07-15 16:17:31.372814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.459 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.372906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.372934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 
00:24:45.460 [2024-07-15 16:17:31.373411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1254000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.373909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.373935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.374039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.374067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.374180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.374206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.374294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.374320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.374435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.374462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.460 qpair failed and we were unable to recover it. 00:24:45.460 [2024-07-15 16:17:31.374580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.460 [2024-07-15 16:17:31.374607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 
00:24:45.461 [2024-07-15 16:17:31.374694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.374721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.374830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.374856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.374971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.374998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.375893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.375919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 
00:24:45.461 [2024-07-15 16:17:31.376044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.376966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.376993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 
00:24:45.461 [2024-07-15 16:17:31.377299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.377873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.377899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 
00:24:45.461 [2024-07-15 16:17:31.378662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.461 [2024-07-15 16:17:31.378829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.461 qpair failed and we were unable to recover it. 00:24:45.461 [2024-07-15 16:17:31.378943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.378975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 
00:24:45.462 [2024-07-15 16:17:31.379867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.379893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.379975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.380900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.380992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 
00:24:45.462 [2024-07-15 16:17:31.381134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.381947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.381977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.382071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.382096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.382183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.382208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 
00:24:45.462 [2024-07-15 16:17:31.382294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.382318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.382402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.462 [2024-07-15 16:17:31.382435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.462 qpair failed and we were unable to recover it. 00:24:45.462 [2024-07-15 16:17:31.382533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.382558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.382644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.382672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.382758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.382785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.382873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.382901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 
00:24:45.463 [2024-07-15 16:17:31.383613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.383888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.383976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.384004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.384085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.384112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.384211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.384238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.384352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.384379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.463 qpair failed and we were unable to recover it. 00:24:45.463 [2024-07-15 16:17:31.384467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.463 [2024-07-15 16:17:31.384493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.384581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.384609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.384701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.384728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 
00:24:45.464 [2024-07-15 16:17:31.384811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.384836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.384922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.384948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.464 qpair failed and we were unable to recover it. 00:24:45.464 [2024-07-15 16:17:31.385743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.464 [2024-07-15 16:17:31.385773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.385859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.385885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 
00:24:45.465 [2024-07-15 16:17:31.385970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.385996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.386931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.386963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 
00:24:45.465 [2024-07-15 16:17:31.387213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.387954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.387985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 
00:24:45.465 [2024-07-15 16:17:31.388452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.388939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.388971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 
00:24:45.465 [2024-07-15 16:17:31.389729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.389873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.389899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.390915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.390963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 
00:24:45.465 [2024-07-15 16:17:31.391063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.391882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.391910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.392009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.392036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.392132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.465 [2024-07-15 16:17:31.392157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.465 qpair failed and we were unable to recover it. 00:24:45.465 [2024-07-15 16:17:31.392249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.392367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.392477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.392585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.392729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.392869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.392895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.392985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.393600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.393902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.393983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.394826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.394939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.394969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.395952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.395986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.396071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.396897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.396922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.397419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.397960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.397989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.398121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.398267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.398435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.398573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.398796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.467 [2024-07-15 16:17:31.398920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.398947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.399897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.399935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.400043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.400071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 00:24:45.467 [2024-07-15 16:17:31.400164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.467 [2024-07-15 16:17:31.400190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.467 qpair failed and we were unable to recover it. 
00:24:45.468 [2024-07-15 16:17:31.400281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.400401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.400541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.400675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.400791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.400930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.400965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 
00:24:45.468 [2024-07-15 16:17:31.401535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.401906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.401933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 
00:24:45.468 [2024-07-15 16:17:31.402815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.402964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.402990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.403879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.403990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 
00:24:45.468 [2024-07-15 16:17:31.404112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.404963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.404990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 
00:24:45.468 [2024-07-15 16:17:31.405358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.405876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.405979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.406005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.406128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.406153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.406312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.406338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.406448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.406474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.468 qpair failed and we were unable to recover it. 00:24:45.468 [2024-07-15 16:17:31.406556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.468 [2024-07-15 16:17:31.406582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 
00:24:45.734 [2024-07-15 16:17:31.406705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.406734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.406848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.406874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.406965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.406993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.407792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 
00:24:45.734 [2024-07-15 16:17:31.407902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.407929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.408939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.408975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 
00:24:45.734 [2024-07-15 16:17:31.409324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f124c000b90 with addr=10.0.0.2, port=4420 00:24:45.734 qpair failed and we were unable to recover it. 00:24:45.734 [2024-07-15 16:17:31.409947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.734 [2024-07-15 16:17:31.409997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 
00:24:45.735 [2024-07-15 16:17:31.410569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.410943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.410976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 
00:24:45.735 [2024-07-15 16:17:31.411829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.411855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.411975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.412924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.412950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 
00:24:45.735 [2024-07-15 16:17:31.413180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.413904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.413999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 
00:24:45.735 [2024-07-15 16:17:31.414322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.414908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.735 [2024-07-15 16:17:31.414933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.735 qpair failed and we were unable to recover it. 00:24:45.735 [2024-07-15 16:17:31.415060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 
00:24:45.736 [2024-07-15 16:17:31.415557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.415865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.415994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.416811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 
00:24:45.736 [2024-07-15 16:17:31.416953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.416987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.417923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.417949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 
00:24:45.736 [2024-07-15 16:17:31.418148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.418889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.418981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 
00:24:45.736 [2024-07-15 16:17:31.419351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1244000b90 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.419901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.419926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.420047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.420073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.736 [2024-07-15 16:17:31.420161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.736 [2024-07-15 16:17:31.420186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.736 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.420297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.420444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.420553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 
00:24:45.737 [2024-07-15 16:17:31.420663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.420799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.420909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.420935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.421773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 
00:24:45.737 [2024-07-15 16:17:31.421887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.421912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.422886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.422911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 
00:24:45.737 [2024-07-15 16:17:31.423142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.423894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.423919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 
00:24:45.737 [2024-07-15 16:17:31.424367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.424939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.424971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.425087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.425113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.737 [2024-07-15 16:17:31.425197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.737 [2024-07-15 16:17:31.425222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.737 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.425342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.425368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.425484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.425509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 
00:24:45.738 [2024-07-15 16:17:31.425596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.425621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.425757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.425782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.425869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.425894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.425975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 
00:24:45.738 [2024-07-15 16:17:31.426860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.426885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.426977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.427876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.427901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 
00:24:45.738 [2024-07-15 16:17:31.427998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.428899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.428924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.429014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.429040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.429157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.429182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 
00:24:45.738 [2024-07-15 16:17:31.429294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.429320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.429426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.738 [2024-07-15 16:17:31.429451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.738 qpair failed and we were unable to recover it. 00:24:45.738 [2024-07-15 16:17:31.429531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.429556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.429644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.429670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.429765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.429790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.429879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.429905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.429998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 
00:24:45.739 [2024-07-15 16:17:31.430589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.430970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.430995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 
00:24:45.739 [2024-07-15 16:17:31.431852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.431974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.431999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.432912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.432938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 
00:24:45.739 [2024-07-15 16:17:31.433044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.433869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.433894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.434005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.434031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.434120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.434145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 
00:24:45.739 [2024-07-15 16:17:31.434250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.434276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.434392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.434417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.434511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.739 [2024-07-15 16:17:31.434536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.739 qpair failed and we were unable to recover it. 00:24:45.739 [2024-07-15 16:17:31.434646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.434671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.434805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.434830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.434919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.434944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 
00:24:45.740 [2024-07-15 16:17:31.435589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.435923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.435948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 
00:24:45.740 [2024-07-15 16:17:31.436753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.436972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.436998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.437938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.437970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 
00:24:45.740 [2024-07-15 16:17:31.438061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.438914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.438938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 
00:24:45.740 [2024-07-15 16:17:31.439338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.740 [2024-07-15 16:17:31.439749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.740 qpair failed and we were unable to recover it. 00:24:45.740 [2024-07-15 16:17:31.439827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.439852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.439996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 
00:24:45.741 [2024-07-15 16:17:31.440695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.440861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.440987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.441879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.441904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 
00:24:45.741 [2024-07-15 16:17:31.441996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.442953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.442983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 
00:24:45.741 [2024-07-15 16:17:31.443181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.443966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.443993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 
00:24:45.741 [2024-07-15 16:17:31.444447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.444939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.444976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.741 qpair failed and we were unable to recover it. 00:24:45.741 [2024-07-15 16:17:31.445085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.741 [2024-07-15 16:17:31.445110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.445200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.445365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.445502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.445643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 
00:24:45.742 [2024-07-15 16:17:31.445812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.445965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.445990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.446867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.446892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 
00:24:45.742 [2024-07-15 16:17:31.447002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.447941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.447978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 
00:24:45.742 [2024-07-15 16:17:31.448181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.448920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.448945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.449071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.449094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.449189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.449213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.449340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.449364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 
00:24:45.742 [2024-07-15 16:17:31.449479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.449505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.742 qpair failed and we were unable to recover it. 00:24:45.742 [2024-07-15 16:17:31.449618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.742 [2024-07-15 16:17:31.449643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.449757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.449782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.449896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.449921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 
00:24:45.743 [2024-07-15 16:17:31.450806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.450926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.450972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.451971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.451998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 
00:24:45.743 [2024-07-15 16:17:31.452074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.452871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.452978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 
00:24:45.743 [2024-07-15 16:17:31.453377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.453968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.453995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 
00:24:45.743 [2024-07-15 16:17:31.454491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.743 qpair failed and we were unable to recover it. 00:24:45.743 [2024-07-15 16:17:31.454705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.743 [2024-07-15 16:17:31.454731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.454804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.454829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.454913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.454937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.454965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.744 [2024-07-15 16:17:31.454997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.744 [2024-07-15 16:17:31.455011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.744 [2024-07-15 16:17:31.455023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.744 [2024-07-15 16:17:31.455033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.744 [2024-07-15 16:17:31.455039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.455147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.455248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 
00:24:45.744 [2024-07-15 16:17:31.455347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 [2024-07-15 16:17:31.455455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160f200 with addr=10.0.0.2, port=4420 00:24:45.744 qpair failed and we were unable to recover it. 00:24:45.744 A controller has encountered a failure and is being reset. 00:24:45.744 [2024-07-15 16:17:31.455483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:45.744 [2024-07-15 16:17:31.455524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:45.744 [2024-07-15 16:17:31.455638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.744 [2024-07-15 16:17:31.455571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:45.744 [2024-07-15 16:17:31.455575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:45.744 [2024-07-15 16:17:31.455685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161d0e0 with addr=10.0.0.2, port=4420 00:24:45.744 [2024-07-15 16:17:31.455705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161d0e0 is same with the state(5) to be set 00:24:45.744 [2024-07-15 16:17:31.455730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161d0e0 (9): Bad file descriptor 00:24:45.744 [2024-07-15 16:17:31.455755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.744 [2024-07-15 16:17:31.455770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.744 [2024-07-15 16:17:31.455785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.744 Unable to reset the controller. 
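Editor's note: the app_setup_trace notices above name the trace artifacts the nvmf target leaves behind; when the qpair keeps failing with errno 111 (connection refused) and the controller cannot be reset, saving that trace before workspace cleanup is the most useful debugging step. A minimal sketch follows, using the instance id 0 and the /dev/shm file exactly as reported in the notices; the build/bin path for spdk_trace and the /tmp destination are assumptions, not taken from this log.

    # Snapshot the tracepoints registered by the running nvmf target
    # (instance id 0, per the app_setup_trace notice above).
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

    # Or keep the raw shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0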
00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 Malloc0 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 [2024-07-15 16:17:31.628412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 [2024-07-15 16:17:31.656642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.744 16:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 888571 00:24:46.677 Controller properly reset. 00:24:51.947 Initializing NVMe Controllers 00:24:51.947 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:51.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:51.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:51.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:51.947 Initialization complete. Launching workers. 
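Editor's note: the rpc_cmd wrappers traced above (bdev_malloc_create, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are thin shims over SPDK's rpc.py client. A rough hand-run equivalent of the tc2 target setup is sketched below; the scripts/rpc.py path and the default RPC socket are assumptions, while the arguments are the ones shown in the trace.

    # Create a 64 MB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # Bring up the TCP transport (with the options used by the test),
    # then the subsystem targeted by the disconnect test.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Expose the subsystem and the discovery service on 10.0.0.2:4420.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420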
00:24:51.947 Starting thread on core 1 00:24:51.947 Starting thread on core 2 00:24:51.947 Starting thread on core 3 00:24:51.947 Starting thread on core 0 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:51.947 00:24:51.947 real 0m11.310s 00:24:51.947 user 0m36.613s 00:24:51.947 sys 0m7.539s 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.947 ************************************ 00:24:51.947 END TEST nvmf_target_disconnect_tc2 00:24:51.947 ************************************ 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.947 rmmod nvme_tcp 00:24:51.947 rmmod nvme_fabrics 00:24:51.947 rmmod nvme_keyring 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 888982 ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 888982 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 888982 ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 888982 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888982 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888982' 00:24:51.947 killing process with pid 888982 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 888982 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 888982 00:24:51.947 16:17:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.947 16:17:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.856 16:17:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.856 00:24:53.856 real 0m16.013s 00:24:53.856 user 1m1.449s 00:24:53.856 sys 0m9.970s 00:24:53.856 16:17:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.856 16:17:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:53.856 ************************************ 00:24:53.856 END TEST nvmf_target_disconnect 00:24:53.856 ************************************ 00:24:53.856 16:17:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.856 16:17:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:53.856 16:17:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.856 16:17:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.856 16:17:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:53.856 00:24:53.856 real 19m9.371s 00:24:53.856 user 45m23.375s 00:24:53.856 sys 4m55.581s 00:24:53.856 16:17:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.856 16:17:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.856 ************************************ 00:24:53.856 END TEST nvmf_tcp 00:24:53.856 ************************************ 00:24:54.114 16:17:39 -- common/autotest_common.sh@1142 -- # return 0 00:24:54.114 16:17:39 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:24:54.114 16:17:39 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:54.114 16:17:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:54.114 16:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.114 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:54.114 ************************************ 00:24:54.114 START TEST spdkcli_nvmf_tcp 00:24:54.114 ************************************ 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:54.114 * Looking for test storage... 
00:24:54.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.114 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=890173 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 890173 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 890173 ']' 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.115 16:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.115 [2024-07-15 16:17:40.003404] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:24:54.115 [2024-07-15 16:17:40.003515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890173 ] 00:24:54.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.115 [2024-07-15 16:17:40.066081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:54.374 [2024-07-15 16:17:40.174481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.374 [2024-07-15 16:17:40.174484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.374 16:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:54.374 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:54.374 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:54.374 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:54.374 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:54.374 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:54.374 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:54.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:54.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:54.374 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:54.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:54.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:54.374 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:54.374 ' 00:24:56.910 [2024-07-15 16:17:42.835719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.287 [2024-07-15 16:17:44.063949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:00.821 [2024-07-15 16:17:46.318954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:02.726 [2024-07-15 16:17:48.261043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:04.104 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:04.104 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:04.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:04.104 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:04.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:04.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:04.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:04.105 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:04.105 16:17:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:04.362 16:17:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:04.362 16:17:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:04.362 16:17:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:04.362 16:17:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.362 16:17:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.619 16:17:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:04.619 16:17:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.619 16:17:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.619 16:17:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:04.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:04.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:04.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:04.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:04.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:04.619 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:04.619 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:04.619 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:04.619 ' 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:09.891 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:09.891 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:09.891 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:09.891 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 890173 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 890173 ']' 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 890173 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890173 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890173' 00:25:09.891 killing process with pid 890173 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 890173 00:25:09.891 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 890173 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 890173 ']' 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 890173 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 890173 ']' 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 890173 00:25:10.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (890173) - No such process 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 890173 is not found' 00:25:10.150 Process with pid 890173 is not found 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:10.150 00:25:10.150 real 0m16.064s 00:25:10.150 user 0m33.901s 00:25:10.150 sys 0m0.821s 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.150 16:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.150 ************************************ 00:25:10.150 END TEST spdkcli_nvmf_tcp 00:25:10.150 ************************************ 00:25:10.150 16:17:55 -- common/autotest_common.sh@1142 -- # return 0 00:25:10.150 16:17:55 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:10.150 16:17:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:10.150 16:17:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.150 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:25:10.150 ************************************ 00:25:10.150 START TEST nvmf_identify_passthru 00:25:10.150 ************************************ 00:25:10.150 16:17:55 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:10.151 * Looking for test storage... 00:25:10.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.151 16:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.151 16:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:10.151 16:17:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.151 16:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.151 16:17:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:10.151 16:17:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.151 16:17:56 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.151 16:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.688 16:17:58 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:12.688 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:12.688 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:12.688 Found net devices under 0000:09:00.0: cvl_0_0 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:12.688 Found net devices under 0000:09:00.1: cvl_0_1 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
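The nvmf_tcp_init trace that follows wires the two detected E810 ports into a private loopback: the target-side port is moved into a network namespace and addressed as 10.0.0.2, while the initiator-side port stays in the default namespace as 10.0.0.1. A minimal hand-run sketch of the same setup, assuming the interface names cvl_0_0 and cvl_0_1 reported above (adjust for other NICs):

  # target port goes into its own namespace; initiator port stays in the default one
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the default NVMe/TCP listener port, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
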
00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:12.688 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:12.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:25:12.689 00:25:12.689 --- 10.0.0.2 ping statistics --- 00:25:12.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.689 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:25:12.689 00:25:12.689 --- 10.0.0.1 ping statistics --- 00:25:12.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.689 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.689 16:17:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:25:12.689 16:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:12.689 16:17:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:12.689 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.880 
16:18:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:25:16.880 16:18:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:16.880 16:18:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:16.880 16:18:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:16.880 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=894794 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 894794 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 894794 ']' 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.127 [2024-07-15 16:18:06.711971] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:25:21.127 [2024-07-15 16:18:06.712072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.127 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.127 [2024-07-15 16:18:06.778872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.127 [2024-07-15 16:18:06.892500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.127 [2024-07-15 16:18:06.892554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:21.127 [2024-07-15 16:18:06.892567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.127 [2024-07-15 16:18:06.892578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.127 [2024-07-15 16:18:06.892588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.127 [2024-07-15 16:18:06.892672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.127 [2024-07-15 16:18:06.892737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.127 [2024-07-15 16:18:06.892780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.127 [2024-07-15 16:18:06.892783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.127 INFO: Log level set to 20 00:25:21.127 INFO: Requests: 00:25:21.127 { 00:25:21.127 "jsonrpc": "2.0", 00:25:21.127 "method": "nvmf_set_config", 00:25:21.127 "id": 1, 00:25:21.127 "params": { 00:25:21.127 "admin_cmd_passthru": { 00:25:21.127 "identify_ctrlr": true 00:25:21.127 } 00:25:21.127 } 00:25:21.127 } 00:25:21.127 00:25:21.127 INFO: response: 00:25:21.127 { 00:25:21.127 "jsonrpc": "2.0", 00:25:21.127 "id": 1, 00:25:21.127 "result": true 00:25:21.127 } 00:25:21.127 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.127 16:18:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.127 16:18:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.127 INFO: Setting log level to 20 00:25:21.127 INFO: Setting log level to 20 00:25:21.127 INFO: Log level set to 20 00:25:21.127 INFO: Log level set to 20 00:25:21.128 INFO: Requests: 00:25:21.128 { 00:25:21.128 "jsonrpc": "2.0", 00:25:21.128 "method": "framework_start_init", 00:25:21.128 "id": 1 00:25:21.128 } 00:25:21.128 00:25:21.128 INFO: Requests: 00:25:21.128 { 00:25:21.128 "jsonrpc": "2.0", 00:25:21.128 "method": "framework_start_init", 00:25:21.128 "id": 1 00:25:21.128 } 00:25:21.128 00:25:21.128 [2024-07-15 16:18:07.052146] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:21.128 INFO: response: 00:25:21.128 { 00:25:21.128 "jsonrpc": "2.0", 00:25:21.128 "id": 1, 00:25:21.128 "result": true 00:25:21.128 } 00:25:21.128 00:25:21.128 INFO: response: 00:25:21.128 { 00:25:21.128 "jsonrpc": "2.0", 00:25:21.128 "id": 1, 00:25:21.128 "result": true 00:25:21.128 } 00:25:21.128 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.128 16:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.128 16:18:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.128 INFO: Setting log level to 40 00:25:21.128 INFO: Setting log level to 40 00:25:21.128 INFO: Setting log level to 40 00:25:21.128 [2024-07-15 16:18:07.062183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.128 16:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.128 16:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.128 16:18:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 Nvme0n1 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 [2024-07-15 16:18:09.949074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 [ 00:25:24.411 { 00:25:24.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:24.411 "subtype": "Discovery", 00:25:24.411 "listen_addresses": [], 00:25:24.411 "allow_any_host": true, 00:25:24.411 "hosts": [] 00:25:24.411 }, 00:25:24.411 { 00:25:24.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.411 "subtype": "NVMe", 00:25:24.411 "listen_addresses": [ 00:25:24.411 { 00:25:24.411 "trtype": "TCP", 00:25:24.411 "adrfam": "IPv4", 00:25:24.411 "traddr": "10.0.0.2", 00:25:24.411 "trsvcid": "4420" 00:25:24.411 } 00:25:24.411 ], 00:25:24.411 "allow_any_host": true, 00:25:24.411 "hosts": [], 00:25:24.411 "serial_number": 
"SPDK00000000000001", 00:25:24.411 "model_number": "SPDK bdev Controller", 00:25:24.411 "max_namespaces": 1, 00:25:24.411 "min_cntlid": 1, 00:25:24.411 "max_cntlid": 65519, 00:25:24.411 "namespaces": [ 00:25:24.411 { 00:25:24.411 "nsid": 1, 00:25:24.411 "bdev_name": "Nvme0n1", 00:25:24.411 "name": "Nvme0n1", 00:25:24.411 "nguid": "EDF61771647C4F3D8E0A19C2783C9214", 00:25:24.411 "uuid": "edf61771-647c-4f3d-8e0a-19c2783c9214" 00:25:24.411 } 00:25:24.411 ] 00:25:24.411 } 00:25:24.411 ] 00:25:24.411 16:18:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:24.411 16:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:24.411 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:24.411 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:24.411 16:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.411 rmmod nvme_tcp 00:25:24.411 rmmod nvme_fabrics 00:25:24.411 rmmod nvme_keyring 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:24.411 16:18:10 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 894794 ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 894794 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 894794 ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 894794 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894794 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 894794' 00:25:24.411 killing process with pid 894794 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 894794 00:25:24.411 16:18:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 894794 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.313 16:18:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.313 16:18:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:26.313 16:18:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.220 16:18:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.220 00:25:28.220 real 0m17.864s 00:25:28.220 user 0m26.025s 00:25:28.220 sys 0m2.349s 00:25:28.220 16:18:13 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.220 16:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.220 ************************************ 00:25:28.220 END TEST nvmf_identify_passthru 00:25:28.220 ************************************ 00:25:28.220 16:18:13 -- common/autotest_common.sh@1142 -- # return 0 00:25:28.220 16:18:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:28.220 16:18:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:28.220 16:18:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.220 16:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:28.220 ************************************ 00:25:28.220 START TEST nvmf_dif 00:25:28.220 ************************************ 00:25:28.220 16:18:13 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:28.220 * Looking for test storage... 
00:25:28.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.220 16:18:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.220 16:18:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.220 16:18:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.220 16:18:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.220 16:18:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.220 16:18:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.220 16:18:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:28.220 16:18:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:28.220 16:18:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.220 16:18:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:28.220 16:18:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.220 16:18:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.220 16:18:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:30.119 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:30.119 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:30.119 Found net devices under 0000:09:00.0: cvl_0_0 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:30.119 Found net devices under 0000:09:00.1: cvl_0_1 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.119 16:18:16 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.120 16:18:16 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.377 16:18:16 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:25:30.377 00:25:30.377 --- 10.0.0.2 ping statistics --- 00:25:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.377 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:30.377 00:25:30.377 --- 10.0.0.1 ping statistics --- 00:25:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.377 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:30.377 16:18:16 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:31.311 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:31.311 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:31.311 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:31.312 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:31.312 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:31.312 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:31.312 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:31.312 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:31.312 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:31.312 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:31.312 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:31.312 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:31.312 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:31.312 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:31.312 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:31.312 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:31.312 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.568 16:18:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:31.568 16:18:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=898551 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:31.568 16:18:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 898551 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 898551 ']' 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.568 16:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.568 [2024-07-15 16:18:17.430887] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:25:31.568 [2024-07-15 16:18:17.430974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.568 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.568 [2024-07-15 16:18:17.496927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.825 [2024-07-15 16:18:17.605281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.825 [2024-07-15 16:18:17.605361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.825 [2024-07-15 16:18:17.605375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.825 [2024-07-15 16:18:17.605392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.825 [2024-07-15 16:18:17.605401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
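The trace above reduces to a short bring-up sequence for a DIF-capable NVMe/TCP target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask -i 0 and tracepoint mask -e 0xFFFF, the harness waits for its RPC socket, and the TCP transport is then created with --dif-insert-or-strip. A minimal sketch of the same steps, assuming the build path shown above and that rpc_cmd/waitforlisten are the autotest helpers wrapping scripts/rpc.py and the default /var/tmp/spdk.sock socket:

    # start the target inside the test namespace (single core, full tracepoint mask)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # block until the app is listening on the RPC socket before issuing RPCs
    waitforlisten "$nvmfpid"
    # TCP transport with DIF insert/strip enabled, as in target/dif.sh
    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip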
00:25:31.825 [2024-07-15 16:18:17.605426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:31.825 16:18:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 16:18:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.825 16:18:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:31.825 16:18:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 [2024-07-15 16:18:17.743519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.825 16:18:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 ************************************ 00:25:31.825 START TEST fio_dif_1_default 00:25:31.825 ************************************ 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 bdev_null0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:31.825 [2024-07-15 16:18:17.799794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.825 { 00:25:31.825 "params": { 00:25:31.825 "name": "Nvme$subsystem", 00:25:31.825 "trtype": "$TEST_TRANSPORT", 00:25:31.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.825 "adrfam": "ipv4", 00:25:31.825 "trsvcid": "$NVMF_PORT", 00:25:31.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.825 "hdgst": ${hdgst:-false}, 00:25:31.825 "ddgst": ${ddgst:-false} 00:25:31.825 }, 00:25:31.825 "method": "bdev_nvme_attach_controller" 00:25:31.825 } 00:25:31.825 EOF 00:25:31.825 )") 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.825 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:31.826 "params": { 00:25:31.826 "name": "Nvme0", 00:25:31.826 "trtype": "tcp", 00:25:31.826 "traddr": "10.0.0.2", 00:25:31.826 "adrfam": "ipv4", 00:25:31.826 "trsvcid": "4420", 00:25:31.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.826 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.826 "hdgst": false, 00:25:31.826 "ddgst": false 00:25:31.826 }, 00:25:31.826 "method": "bdev_nvme_attach_controller" 00:25:31.826 }' 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:31.826 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:32.083 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:32.083 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:32.083 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:32.083 16:18:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.083 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:32.083 fio-3.35 00:25:32.083 Starting 1 thread 00:25:32.341 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.534 00:25:44.534 filename0: (groupid=0, jobs=1): err= 0: pid=898784: Mon Jul 15 16:18:28 2024 00:25:44.534 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10027msec) 00:25:44.534 slat (nsec): min=6727, max=47724, avg=9341.77, stdev=4311.94 00:25:44.534 clat (usec): min=577, max=42309, avg=20990.86, stdev=20289.75 00:25:44.534 lat (usec): min=584, max=42320, avg=21000.20, stdev=20290.24 00:25:44.534 clat percentiles (usec): 00:25:44.534 | 1.00th=[ 603], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 725], 00:25:44.534 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 7635], 60.00th=[41157], 00:25:44.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:44.534 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:44.534 | 99.99th=[42206] 00:25:44.534 bw ( KiB/s): min= 704, max= 768, per=99.98%, avg=761.60, stdev=19.70, samples=20 00:25:44.534 iops : min= 176, max= 192, 
avg=190.40, stdev= 4.92, samples=20 00:25:44.534 lat (usec) : 750=36.53%, 1000=13.36% 00:25:44.534 lat (msec) : 10=0.21%, 50=49.90% 00:25:44.534 cpu : usr=88.40%, sys=11.33%, ctx=18, majf=0, minf=175 00:25:44.534 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.534 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.534 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:44.534 00:25:44.534 Run status group 0 (all jobs): 00:25:44.534 READ: bw=761KiB/s (779kB/s), 761KiB/s-761KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10027-10027msec 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 00:25:44.534 real 0m11.138s 00:25:44.534 user 0m10.036s 00:25:44.534 sys 0m1.438s 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 ************************************ 00:25:44.534 END TEST fio_dif_1_default 00:25:44.534 ************************************ 00:25:44.534 16:18:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:44.534 16:18:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:44.534 16:18:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:44.534 16:18:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 ************************************ 00:25:44.534 START TEST fio_dif_1_multi_subsystems 00:25:44.534 ************************************ 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:44.534 16:18:28 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 bdev_null0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 [2024-07-15 16:18:28.986042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.534 bdev_null1 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.534 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.534 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:44.534 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.534 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:44.535 { 00:25:44.535 "params": { 00:25:44.535 "name": "Nvme$subsystem", 00:25:44.535 "trtype": "$TEST_TRANSPORT", 00:25:44.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.535 "adrfam": "ipv4", 00:25:44.535 "trsvcid": "$NVMF_PORT", 00:25:44.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.535 "hdgst": ${hdgst:-false}, 00:25:44.535 "ddgst": ${ddgst:-false} 00:25:44.535 }, 00:25:44.535 "method": "bdev_nvme_attach_controller" 00:25:44.535 } 00:25:44.535 EOF 00:25:44.535 )") 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:44.535 { 00:25:44.535 "params": { 00:25:44.535 "name": "Nvme$subsystem", 00:25:44.535 "trtype": "$TEST_TRANSPORT", 00:25:44.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.535 "adrfam": "ipv4", 00:25:44.535 "trsvcid": "$NVMF_PORT", 00:25:44.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.535 "hdgst": ${hdgst:-false}, 00:25:44.535 "ddgst": ${ddgst:-false} 00:25:44.535 }, 00:25:44.535 "method": "bdev_nvme_attach_controller" 00:25:44.535 } 00:25:44.535 EOF 00:25:44.535 )") 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:44.535 "params": { 00:25:44.535 "name": "Nvme0", 00:25:44.535 "trtype": "tcp", 00:25:44.535 "traddr": "10.0.0.2", 00:25:44.535 "adrfam": "ipv4", 00:25:44.535 "trsvcid": "4420", 00:25:44.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:44.535 "hdgst": false, 00:25:44.535 "ddgst": false 00:25:44.535 }, 00:25:44.535 "method": "bdev_nvme_attach_controller" 00:25:44.535 },{ 00:25:44.535 "params": { 00:25:44.535 "name": "Nvme1", 00:25:44.535 "trtype": "tcp", 00:25:44.535 "traddr": "10.0.0.2", 00:25:44.535 "adrfam": "ipv4", 00:25:44.535 "trsvcid": "4420", 00:25:44.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.535 "hdgst": false, 00:25:44.535 "ddgst": false 00:25:44.535 }, 00:25:44.535 "method": "bdev_nvme_attach_controller" 00:25:44.535 }' 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:44.535 16:18:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.535 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:44.535 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:44.535 fio-3.35 00:25:44.535 Starting 2 threads 00:25:44.535 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.516 00:25:54.516 filename0: (groupid=0, jobs=1): err= 0: pid=900186: Mon Jul 15 16:18:39 2024 00:25:54.516 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:25:54.516 slat (nsec): min=7228, max=41166, avg=9368.61, stdev=3191.68 00:25:54.516 clat (usec): min=40663, max=42586, avg=40997.91, stdev=183.64 00:25:54.516 lat (usec): min=40670, max=42622, avg=41007.28, stdev=184.00 00:25:54.516 clat percentiles (usec): 00:25:54.516 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:25:54.516 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:54.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:54.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:54.516 | 99.99th=[42730] 
00:25:54.516 bw ( KiB/s): min= 384, max= 416, per=33.76%, avg=388.80, stdev=11.72, samples=20 00:25:54.516 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:25:54.516 lat (msec) : 50=100.00% 00:25:54.516 cpu : usr=94.32%, sys=5.39%, ctx=16, majf=0, minf=176 00:25:54.516 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.516 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.516 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:54.516 filename1: (groupid=0, jobs=1): err= 0: pid=900187: Mon Jul 15 16:18:39 2024 00:25:54.516 read: IOPS=189, BW=760KiB/s (778kB/s)(7616KiB/10024msec) 00:25:54.516 slat (nsec): min=7177, max=75238, avg=9534.82, stdev=3909.02 00:25:54.516 clat (usec): min=551, max=42613, avg=21028.56, stdev=20370.00 00:25:54.516 lat (usec): min=560, max=42633, avg=21038.10, stdev=20369.78 00:25:54.516 clat percentiles (usec): 00:25:54.516 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 652], 00:25:54.516 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 1106], 60.00th=[41157], 00:25:54.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:25:54.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:54.516 | 99.99th=[42730] 00:25:54.516 bw ( KiB/s): min= 672, max= 832, per=66.13%, avg=760.00, stdev=37.25, samples=20 00:25:54.516 iops : min= 168, max= 208, avg=190.00, stdev= 9.31, samples=20 00:25:54.516 lat (usec) : 750=46.80%, 1000=2.73% 00:25:54.516 lat (msec) : 2=0.47%, 50=50.00% 00:25:54.516 cpu : usr=94.79%, sys=4.91%, ctx=13, majf=0, minf=171 00:25:54.516 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.516 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.516 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:54.516 00:25:54.516 Run status group 0 (all jobs): 00:25:54.516 READ: bw=1149KiB/s (1177kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), run=10011-10024msec 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 00:25:54.517 real 0m11.346s 00:25:54.517 user 0m20.306s 00:25:54.517 sys 0m1.299s 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 ************************************ 00:25:54.517 END TEST fio_dif_1_multi_subsystems 00:25:54.517 ************************************ 00:25:54.517 16:18:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:54.517 16:18:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:54.517 16:18:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:54.517 16:18:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 ************************************ 00:25:54.517 START TEST fio_dif_rand_params 00:25:54.517 ************************************ 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 bdev_null0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 [2024-07-15 16:18:40.380929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:54.517 { 00:25:54.517 "params": { 00:25:54.517 "name": "Nvme$subsystem", 00:25:54.517 "trtype": "$TEST_TRANSPORT", 00:25:54.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.517 "adrfam": "ipv4", 00:25:54.517 "trsvcid": "$NVMF_PORT", 00:25:54.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.517 "hdgst": ${hdgst:-false}, 00:25:54.517 "ddgst": ${ddgst:-false} 00:25:54.517 }, 00:25:54.517 "method": "bdev_nvme_attach_controller" 00:25:54.517 } 00:25:54.517 EOF 00:25:54.517 )") 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.517 16:18:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
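Just below, the assembled bdev_nvme attach JSON is printed and handed to the fio plugin on /dev/fd/62, alongside the generated job file on /dev/fd/61. The namespace that JSON points at was created a few entries earlier with the usual four-RPC sequence, here using a DIF type 3 null bdev for the fio_dif_rand_params case; condensed, and again assuming rpc_cmd is the scripts/rpc.py wrapper used throughout this run:

    # 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 3
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420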
00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:54.517 "params": { 00:25:54.517 "name": "Nvme0", 00:25:54.517 "trtype": "tcp", 00:25:54.517 "traddr": "10.0.0.2", 00:25:54.517 "adrfam": "ipv4", 00:25:54.517 "trsvcid": "4420", 00:25:54.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.517 "hdgst": false, 00:25:54.517 "ddgst": false 00:25:54.517 }, 00:25:54.517 "method": "bdev_nvme_attach_controller" 00:25:54.517 }' 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:54.517 16:18:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.776 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:54.776 ... 
00:25:54.776 fio-3.35 00:25:54.776 Starting 3 threads 00:25:54.776 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.333 00:26:01.333 filename0: (groupid=0, jobs=1): err= 0: pid=901583: Mon Jul 15 16:18:46 2024 00:26:01.333 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5004msec) 00:26:01.333 slat (usec): min=7, max=104, avg=14.35, stdev= 4.79 00:26:01.333 clat (usec): min=4680, max=53749, avg=13994.29, stdev=4727.74 00:26:01.333 lat (usec): min=4693, max=53762, avg=14008.65, stdev=4727.79 00:26:01.333 clat percentiles (usec): 00:26:01.333 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11469], 00:26:01.333 | 30.00th=[12518], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:26:01.333 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16188], 95.00th=[16909], 00:26:01.333 | 99.00th=[45876], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:26:01.333 | 99.99th=[53740] 00:26:01.333 bw ( KiB/s): min=19968, max=32256, per=33.11%, avg=27366.40, stdev=3351.87, samples=10 00:26:01.333 iops : min= 156, max= 252, avg=213.80, stdev=26.19, samples=10 00:26:01.333 lat (msec) : 10=9.34%, 20=89.26%, 50=1.12%, 100=0.28% 00:26:01.333 cpu : usr=92.70%, sys=6.80%, ctx=17, majf=0, minf=153 00:26:01.333 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.333 filename0: (groupid=0, jobs=1): err= 0: pid=901584: Mon Jul 15 16:18:46 2024 00:26:01.333 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(143MiB/5014msec) 00:26:01.333 slat (nsec): min=7478, max=53558, avg=15424.61, stdev=5793.62 00:26:01.333 clat (usec): min=4241, max=56691, avg=13149.08, stdev=7713.07 00:26:01.333 lat (usec): min=4253, max=56698, avg=13164.50, stdev=7712.64 00:26:01.333 clat percentiles (usec): 00:26:01.333 | 1.00th=[ 7701], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:26:01.333 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:26:01.333 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14222], 95.00th=[15533], 00:26:01.333 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56361], 99.95th=[56886], 00:26:01.333 | 99.99th=[56886] 00:26:01.333 bw ( KiB/s): min=19200, max=33792, per=35.28%, avg=29158.40, stdev=4750.32, samples=10 00:26:01.333 iops : min= 150, max= 264, avg=227.80, stdev=37.11, samples=10 00:26:01.333 lat (msec) : 10=7.62%, 20=88.70%, 50=0.88%, 100=2.80% 00:26:01.333 cpu : usr=92.30%, sys=7.16%, ctx=30, majf=0, minf=123 00:26:01.333 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 issued rwts: total=1142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.333 filename0: (groupid=0, jobs=1): err= 0: pid=901585: Mon Jul 15 16:18:46 2024 00:26:01.333 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(131MiB/5044msec) 00:26:01.333 slat (nsec): min=7436, max=35186, avg=14179.76, stdev=3215.72 00:26:01.333 clat (usec): min=4382, max=49054, avg=14434.65, stdev=4930.84 00:26:01.333 lat (usec): min=4393, max=49066, avg=14448.83, stdev=4931.16 00:26:01.333 clat percentiles (usec): 00:26:01.333 | 
1.00th=[ 5014], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11994], 00:26:01.333 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14615], 60.00th=[15270], 00:26:01.333 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16712], 95.00th=[17695], 00:26:01.333 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[49021], 00:26:01.333 | 99.99th=[49021] 00:26:01.333 bw ( KiB/s): min=23808, max=30208, per=32.27%, avg=26675.20, stdev=2036.59, samples=10 00:26:01.333 iops : min= 186, max= 236, avg=208.40, stdev=15.91, samples=10 00:26:01.333 lat (msec) : 10=10.63%, 20=87.45%, 50=1.92% 00:26:01.333 cpu : usr=93.04%, sys=6.46%, ctx=13, majf=0, minf=118 00:26:01.333 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.333 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.333 00:26:01.333 Run status group 0 (all jobs): 00:26:01.333 READ: bw=80.7MiB/s (84.6MB/s), 25.9MiB/s-28.5MiB/s (27.1MB/s-29.9MB/s), io=407MiB (427MB), run=5004-5044msec 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:01.333 16:18:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 bdev_null0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 [2024-07-15 16:18:46.421415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.333 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 bdev_null1 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 bdev_null2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:01.334 { 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme$subsystem", 00:26:01.334 "trtype": "$TEST_TRANSPORT", 00:26:01.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "$NVMF_PORT", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.334 "hdgst": ${hdgst:-false}, 00:26:01.334 "ddgst": ${ddgst:-false} 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 } 00:26:01.334 EOF 00:26:01.334 )") 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:01.334 { 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme$subsystem", 00:26:01.334 "trtype": "$TEST_TRANSPORT", 00:26:01.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "$NVMF_PORT", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.334 "hdgst": ${hdgst:-false}, 00:26:01.334 "ddgst": ${ddgst:-false} 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 } 00:26:01.334 EOF 00:26:01.334 )") 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:01.334 { 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme$subsystem", 00:26:01.334 "trtype": "$TEST_TRANSPORT", 00:26:01.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "$NVMF_PORT", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.334 "hdgst": ${hdgst:-false}, 00:26:01.334 "ddgst": ${ddgst:-false} 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 } 00:26:01.334 EOF 00:26:01.334 )") 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme0", 00:26:01.334 "trtype": "tcp", 00:26:01.334 "traddr": "10.0.0.2", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "4420", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.334 "hdgst": false, 00:26:01.334 "ddgst": false 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 },{ 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme1", 00:26:01.334 "trtype": "tcp", 00:26:01.334 "traddr": "10.0.0.2", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "4420", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:01.334 "hdgst": false, 00:26:01.334 "ddgst": false 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 },{ 00:26:01.334 "params": { 00:26:01.334 "name": "Nvme2", 00:26:01.334 "trtype": "tcp", 00:26:01.334 "traddr": "10.0.0.2", 00:26:01.334 "adrfam": "ipv4", 00:26:01.334 "trsvcid": "4420", 00:26:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:01.334 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:01.334 "hdgst": false, 00:26:01.334 "ddgst": false 00:26:01.334 }, 00:26:01.334 "method": "bdev_nvme_attach_controller" 00:26:01.334 }' 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:01.334 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:01.335 16:18:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:01.335 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:01.335 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:01.335 16:18:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.335 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:01.335 ... 00:26:01.335 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:01.335 ... 00:26:01.335 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:01.335 ... 00:26:01.335 fio-3.35 00:26:01.335 Starting 24 threads 00:26:01.335 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.542 00:26:13.542 filename0: (groupid=0, jobs=1): err= 0: pid=902445: Mon Jul 15 16:18:57 2024 00:26:13.542 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10102msec) 00:26:13.542 slat (usec): min=4, max=137, avg=33.28, stdev=32.87 00:26:13.542 clat (msec): min=127, max=448, avg=265.35, stdev=61.82 00:26:13.542 lat (msec): min=127, max=448, avg=265.38, stdev=61.84 00:26:13.542 clat percentiles (msec): 00:26:13.542 | 1.00th=[ 128], 5.00th=[ 153], 10.00th=[ 203], 20.00th=[ 213], 00:26:13.542 | 30.00th=[ 239], 40.00th=[ 262], 50.00th=[ 275], 60.00th=[ 279], 00:26:13.542 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 351], 95.00th=[ 384], 00:26:13.542 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 447], 99.95th=[ 447], 00:26:13.542 | 99.99th=[ 447] 00:26:13.542 bw ( KiB/s): min= 128, max= 384, per=4.39%, avg=236.80, stdev=61.11, samples=20 00:26:13.542 iops : min= 32, max= 96, avg=59.20, stdev=15.28, samples=20 00:26:13.542 lat (msec) : 250=36.51%, 500=63.49% 00:26:13.542 cpu : usr=98.32%, sys=1.25%, ctx=31, majf=0, minf=48 00:26:13.542 IO depths : 1=2.5%, 2=8.1%, 4=23.0%, 8=56.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:26:13.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.542 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.542 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.542 filename0: (groupid=0, jobs=1): err= 0: pid=902446: Mon Jul 15 16:18:57 2024 00:26:13.542 read: IOPS=63, BW=256KiB/s (262kB/s)(2584KiB/10103msec) 00:26:13.542 slat (usec): min=4, max=110, avg=24.49, stdev=25.95 00:26:13.542 clat (msec): min=129, max=398, avg=249.76, stdev=45.63 00:26:13.542 lat (msec): min=129, max=398, avg=249.79, stdev=45.63 00:26:13.542 clat percentiles (msec): 00:26:13.542 | 1.00th=[ 130], 5.00th=[ 153], 10.00th=[ 203], 20.00th=[ 226], 00:26:13.542 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 259], 00:26:13.542 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:26:13.542 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 397], 00:26:13.542 | 99.99th=[ 397] 00:26:13.542 bw ( KiB/s): min= 128, max= 384, per=4.69%, avg=252.00, stdev=45.22, samples=20 00:26:13.542 iops : min= 32, max= 96, avg=63.00, stdev=11.30, samples=20 00:26:13.542 lat (msec) : 250=53.25%, 500=46.75% 00:26:13.542 cpu : usr=98.41%, sys=1.15%, ctx=26, majf=0, minf=46 00:26:13.542 IO depths : 1=1.4%, 2=4.8%, 4=16.4%, 
8=66.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:13.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.542 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.542 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.542 filename0: (groupid=0, jobs=1): err= 0: pid=902447: Mon Jul 15 16:18:57 2024 00:26:13.542 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10066msec) 00:26:13.542 slat (usec): min=8, max=111, avg=49.30, stdev=29.12 00:26:13.543 clat (msec): min=235, max=400, avg=346.65, stdev=52.62 00:26:13.543 lat (msec): min=235, max=400, avg=346.70, stdev=52.63 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 236], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 296], 00:26:13.543 | 30.00th=[ 321], 40.00th=[ 351], 50.00th=[ 372], 60.00th=[ 380], 00:26:13.543 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.543 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:26:13.543 | 99.99th=[ 401] 00:26:13.543 bw ( KiB/s): min= 127, max= 256, per=3.33%, avg=179.15, stdev=64.38, samples=20 00:26:13.543 iops : min= 31, max= 64, avg=44.75, stdev=16.13, samples=20 00:26:13.543 lat (msec) : 250=10.34%, 500=89.66% 00:26:13.543 cpu : usr=98.38%, sys=1.15%, ctx=31, majf=0, minf=46 00:26:13.543 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename0: (groupid=0, jobs=1): err= 0: pid=902448: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10087msec) 00:26:13.543 slat (nsec): min=8162, max=71490, avg=14339.04, stdev=9660.67 00:26:13.543 clat (msec): min=124, max=292, avg=238.64, stdev=42.23 00:26:13.543 lat (msec): min=124, max=292, avg=238.65, stdev=42.23 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 125], 5.00th=[ 171], 10.00th=[ 186], 20.00th=[ 201], 00:26:13.543 | 30.00th=[ 211], 40.00th=[ 224], 50.00th=[ 243], 60.00th=[ 271], 00:26:13.543 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 292], 00:26:13.543 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 292], 00:26:13.543 | 99.99th=[ 292] 00:26:13.543 bw ( KiB/s): min= 144, max= 384, per=4.88%, avg=262.40, stdev=59.05, samples=20 00:26:13.543 iops : min= 36, max= 96, avg=65.60, stdev=14.76, samples=20 00:26:13.543 lat (msec) : 250=54.17%, 500=45.83% 00:26:13.543 cpu : usr=98.56%, sys=1.05%, ctx=35, majf=0, minf=31 00:26:13.543 IO depths : 1=0.7%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename0: (groupid=0, jobs=1): err= 0: pid=902449: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10077msec) 00:26:13.543 slat (usec): min=8, max=105, avg=19.50, stdev=19.90 00:26:13.543 clat (msec): min=129, max=359, avg=250.30, stdev=39.78 00:26:13.543 lat (msec): min=129, max=359, 
avg=250.32, stdev=39.78 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 130], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 209], 00:26:13.543 | 30.00th=[ 226], 40.00th=[ 245], 50.00th=[ 259], 60.00th=[ 271], 00:26:13.543 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 292], 00:26:13.543 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 359], 99.95th=[ 359], 00:26:13.543 | 99.99th=[ 359] 00:26:13.543 bw ( KiB/s): min= 144, max= 256, per=4.64%, avg=249.60, stdev=25.11, samples=20 00:26:13.543 iops : min= 36, max= 64, avg=62.40, stdev= 6.28, samples=20 00:26:13.543 lat (msec) : 250=47.34%, 500=52.66% 00:26:13.543 cpu : usr=98.41%, sys=1.16%, ctx=18, majf=0, minf=52 00:26:13.543 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename0: (groupid=0, jobs=1): err= 0: pid=902450: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10089msec) 00:26:13.543 slat (usec): min=8, max=105, avg=20.63, stdev=23.04 00:26:13.543 clat (msec): min=146, max=421, avg=258.25, stdev=39.87 00:26:13.543 lat (msec): min=146, max=422, avg=258.27, stdev=39.87 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 167], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 222], 00:26:13.543 | 30.00th=[ 236], 40.00th=[ 251], 50.00th=[ 271], 60.00th=[ 275], 00:26:13.543 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 342], 00:26:13.543 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 422], 99.95th=[ 422], 00:26:13.543 | 99.99th=[ 422] 00:26:13.543 bw ( KiB/s): min= 128, max= 384, per=4.52%, avg=243.20, stdev=55.57, samples=20 00:26:13.543 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:26:13.543 lat (msec) : 250=40.06%, 500=59.94% 00:26:13.543 cpu : usr=98.44%, sys=1.12%, ctx=16, majf=0, minf=34 00:26:13.543 IO depths : 1=2.6%, 2=8.8%, 4=25.0%, 8=53.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename0: (groupid=0, jobs=1): err= 0: pid=902451: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10066msec) 00:26:13.543 slat (usec): min=8, max=103, avg=28.81, stdev=12.20 00:26:13.543 clat (msec): min=161, max=510, avg=346.82, stdev=64.73 00:26:13.543 lat (msec): min=161, max=510, avg=346.85, stdev=64.74 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 167], 5.00th=[ 205], 10.00th=[ 249], 20.00th=[ 300], 00:26:13.543 | 30.00th=[ 338], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 380], 00:26:13.543 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.543 | 99.00th=[ 485], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510], 00:26:13.543 | 99.99th=[ 510] 00:26:13.543 bw ( KiB/s): min= 127, max= 256, per=3.33%, avg=179.15, stdev=64.38, samples=20 00:26:13.543 iops : min= 31, max= 64, avg=44.75, stdev=16.13, samples=20 00:26:13.543 lat (msec) : 250=11.21%, 500=88.36%, 750=0.43% 00:26:13.543 cpu : usr=98.13%, sys=1.25%, ctx=105, majf=0, minf=41 
00:26:13.543 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename0: (groupid=0, jobs=1): err= 0: pid=902452: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10065msec) 00:26:13.543 slat (nsec): min=8621, max=84357, avg=34223.71, stdev=14602.12 00:26:13.543 clat (msec): min=185, max=495, avg=346.75, stdev=62.01 00:26:13.543 lat (msec): min=185, max=495, avg=346.78, stdev=62.00 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 205], 5.00th=[ 213], 10.00th=[ 241], 20.00th=[ 288], 00:26:13.543 | 30.00th=[ 338], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 380], 00:26:13.543 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.543 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 498], 99.95th=[ 498], 00:26:13.543 | 99.99th=[ 498] 00:26:13.543 bw ( KiB/s): min= 128, max= 256, per=3.33%, avg=179.20, stdev=61.33, samples=20 00:26:13.543 iops : min= 32, max= 64, avg=44.80, stdev=15.33, samples=20 00:26:13.543 lat (msec) : 250=10.78%, 500=89.22% 00:26:13.543 cpu : usr=98.35%, sys=1.20%, ctx=26, majf=0, minf=35 00:26:13.543 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename1: (groupid=0, jobs=1): err= 0: pid=902453: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=71, BW=286KiB/s (293kB/s)(2888KiB/10094msec) 00:26:13.543 slat (usec): min=4, max=246, avg=27.76, stdev=30.16 00:26:13.543 clat (msec): min=2, max=467, avg=222.40, stdev=82.76 00:26:13.543 lat (msec): min=2, max=467, avg=222.43, stdev=82.76 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 72], 20.00th=[ 205], 00:26:13.543 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 243], 60.00th=[ 253], 00:26:13.543 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 288], 00:26:13.543 | 99.00th=[ 435], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:26:13.543 | 99.99th=[ 468] 00:26:13.543 bw ( KiB/s): min= 160, max= 896, per=5.25%, avg=282.40, stdev=152.26, samples=20 00:26:13.543 iops : min= 40, max= 224, avg=70.60, stdev=38.06, samples=20 00:26:13.543 lat (msec) : 4=4.43%, 20=2.22%, 50=2.22%, 100=2.22%, 250=44.88% 00:26:13.543 lat (msec) : 500=44.04% 00:26:13.543 cpu : usr=98.20%, sys=1.34%, ctx=37, majf=0, minf=79 00:26:13.543 IO depths : 1=0.6%, 2=1.7%, 4=13.2%, 8=72.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename1: (groupid=0, jobs=1): err= 0: pid=902454: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10100msec) 00:26:13.543 slat (nsec): min=5698, max=94922, avg=17600.05, 
stdev=18708.79 00:26:13.543 clat (msec): min=126, max=355, avg=245.88, stdev=40.21 00:26:13.543 lat (msec): min=126, max=355, avg=245.89, stdev=40.22 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 127], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 213], 00:26:13.543 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 264], 00:26:13.543 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 292], 00:26:13.543 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:26:13.543 | 99.99th=[ 355] 00:26:13.543 bw ( KiB/s): min= 144, max= 368, per=4.75%, avg=256.00, stdev=36.34, samples=20 00:26:13.543 iops : min= 36, max= 92, avg=64.00, stdev= 9.08, samples=20 00:26:13.543 lat (msec) : 250=53.66%, 500=46.34% 00:26:13.543 cpu : usr=97.96%, sys=1.53%, ctx=28, majf=0, minf=43 00:26:13.543 IO depths : 1=1.4%, 2=6.4%, 4=21.3%, 8=59.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename1: (groupid=0, jobs=1): err= 0: pid=902455: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10077msec) 00:26:13.543 slat (nsec): min=8374, max=96733, avg=33533.90, stdev=17177.46 00:26:13.543 clat (msec): min=190, max=510, avg=346.88, stdev=63.03 00:26:13.543 lat (msec): min=190, max=510, avg=346.91, stdev=63.04 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 190], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 279], 00:26:13.543 | 30.00th=[ 330], 40.00th=[ 342], 50.00th=[ 368], 60.00th=[ 380], 00:26:13.543 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.543 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 510], 00:26:13.543 | 99.99th=[ 510] 00:26:13.543 bw ( KiB/s): min= 127, max= 256, per=3.33%, avg=179.15, stdev=57.99, samples=20 00:26:13.543 iops : min= 31, max= 64, avg=44.75, stdev=14.53, samples=20 00:26:13.543 lat (msec) : 250=10.78%, 500=88.79%, 750=0.43% 00:26:13.543 cpu : usr=97.74%, sys=1.53%, ctx=86, majf=0, minf=40 00:26:13.543 IO depths : 1=3.7%, 2=9.7%, 4=24.4%, 8=53.4%, 16=8.8%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename1: (groupid=0, jobs=1): err= 0: pid=902456: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=52, BW=211KiB/s (216kB/s)(2112KiB/10011msec) 00:26:13.543 slat (usec): min=8, max=140, avg=53.36, stdev=35.85 00:26:13.543 clat (msec): min=114, max=485, avg=302.95, stdev=80.42 00:26:13.543 lat (msec): min=114, max=485, avg=303.00, stdev=80.45 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 115], 5.00th=[ 153], 10.00th=[ 207], 20.00th=[ 224], 00:26:13.543 | 30.00th=[ 262], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 347], 00:26:13.543 | 70.00th=[ 376], 80.00th=[ 384], 90.00th=[ 388], 95.00th=[ 397], 00:26:13.543 | 99.00th=[ 477], 99.50th=[ 481], 99.90th=[ 485], 99.95th=[ 485], 00:26:13.543 | 99.99th=[ 485] 00:26:13.543 bw ( KiB/s): min= 128, max= 368, per=3.80%, avg=204.80, stdev=72.79, samples=20 00:26:13.543 iops : min= 32, max= 92, avg=51.20, 
stdev=18.20, samples=20 00:26:13.543 lat (msec) : 250=23.86%, 500=76.14% 00:26:13.543 cpu : usr=98.31%, sys=1.21%, ctx=29, majf=0, minf=40 00:26:13.543 IO depths : 1=2.1%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.543 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.543 filename1: (groupid=0, jobs=1): err= 0: pid=902457: Mon Jul 15 16:18:57 2024 00:26:13.543 read: IOPS=56, BW=228KiB/s (233kB/s)(2296KiB/10089msec) 00:26:13.543 slat (usec): min=8, max=106, avg=30.31, stdev=30.39 00:26:13.543 clat (msec): min=132, max=423, avg=280.58, stdev=59.60 00:26:13.543 lat (msec): min=132, max=423, avg=280.61, stdev=59.61 00:26:13.543 clat percentiles (msec): 00:26:13.543 | 1.00th=[ 133], 5.00th=[ 207], 10.00th=[ 218], 20.00th=[ 241], 00:26:13.543 | 30.00th=[ 245], 40.00th=[ 257], 50.00th=[ 275], 60.00th=[ 284], 00:26:13.543 | 70.00th=[ 292], 80.00th=[ 351], 90.00th=[ 372], 95.00th=[ 388], 00:26:13.543 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:26:13.543 | 99.99th=[ 422] 00:26:13.543 bw ( KiB/s): min= 128, max= 384, per=4.15%, avg=223.20, stdev=65.55, samples=20 00:26:13.543 iops : min= 32, max= 96, avg=55.80, stdev=16.39, samples=20 00:26:13.543 lat (msec) : 250=36.93%, 500=63.07% 00:26:13.543 cpu : usr=98.24%, sys=1.29%, ctx=16, majf=0, minf=28 00:26:13.543 IO depths : 1=2.1%, 2=5.2%, 4=15.3%, 8=66.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:13.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename1: (groupid=0, jobs=1): err= 0: pid=902458: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=48, BW=193KiB/s (197kB/s)(1944KiB/10082msec) 00:26:13.544 slat (nsec): min=6528, max=94160, avg=26989.52, stdev=11107.27 00:26:13.544 clat (msec): min=166, max=510, avg=331.40, stdev=77.71 00:26:13.544 lat (msec): min=166, max=510, avg=331.43, stdev=77.71 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 171], 5.00th=[ 197], 10.00th=[ 213], 20.00th=[ 241], 00:26:13.544 | 30.00th=[ 275], 40.00th=[ 338], 50.00th=[ 368], 60.00th=[ 380], 00:26:13.544 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.544 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 510], 00:26:13.544 | 99.99th=[ 510] 00:26:13.544 bw ( KiB/s): min= 128, max= 336, per=3.50%, avg=188.00, stdev=69.81, samples=20 00:26:13.544 iops : min= 32, max= 84, avg=47.00, stdev=17.45, samples=20 00:26:13.544 lat (msec) : 250=21.40%, 500=78.19%, 750=0.41% 00:26:13.544 cpu : usr=98.07%, sys=1.38%, ctx=22, majf=0, minf=75 00:26:13.544 IO depths : 1=2.9%, 2=8.0%, 4=21.6%, 8=57.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename1: (groupid=0, jobs=1): err= 0: pid=902459: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=46, BW=184KiB/s 
(189kB/s)(1856KiB/10065msec) 00:26:13.544 slat (nsec): min=8219, max=68205, avg=20365.27, stdev=10606.24 00:26:13.544 clat (msec): min=200, max=570, avg=346.89, stdev=66.42 00:26:13.544 lat (msec): min=200, max=570, avg=346.91, stdev=66.41 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 201], 5.00th=[ 222], 10.00th=[ 249], 20.00th=[ 279], 00:26:13.544 | 30.00th=[ 317], 40.00th=[ 338], 50.00th=[ 376], 60.00th=[ 380], 00:26:13.544 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.544 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:26:13.544 | 99.99th=[ 575] 00:26:13.544 bw ( KiB/s): min= 128, max= 256, per=3.33%, avg=179.20, stdev=58.18, samples=20 00:26:13.544 iops : min= 32, max= 64, avg=44.80, stdev=14.54, samples=20 00:26:13.544 lat (msec) : 250=12.50%, 500=85.78%, 750=1.72% 00:26:13.544 cpu : usr=98.42%, sys=1.14%, ctx=19, majf=0, minf=54 00:26:13.544 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename1: (groupid=0, jobs=1): err= 0: pid=902460: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=64, BW=257KiB/s (263kB/s)(2592KiB/10089msec) 00:26:13.544 slat (nsec): min=7475, max=52724, avg=12501.98, stdev=7404.80 00:26:13.544 clat (msec): min=149, max=402, avg=248.64, stdev=38.68 00:26:13.544 lat (msec): min=149, max=402, avg=248.65, stdev=38.67 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 150], 5.00th=[ 186], 10.00th=[ 207], 20.00th=[ 226], 00:26:13.544 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:26:13.544 | 70.00th=[ 262], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 300], 00:26:13.544 | 99.00th=[ 380], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:26:13.544 | 99.99th=[ 405] 00:26:13.544 bw ( KiB/s): min= 128, max= 368, per=4.69%, avg=252.80, stdev=46.02, samples=20 00:26:13.544 iops : min= 32, max= 92, avg=63.20, stdev=11.51, samples=20 00:26:13.544 lat (msec) : 250=54.94%, 500=45.06% 00:26:13.544 cpu : usr=98.30%, sys=1.08%, ctx=55, majf=0, minf=58 00:26:13.544 IO depths : 1=0.6%, 2=2.2%, 4=10.6%, 8=74.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=90.0%, 8=4.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902461: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=45, BW=183KiB/s (188kB/s)(1848KiB/10073msec) 00:26:13.544 slat (nsec): min=8219, max=94210, avg=23602.03, stdev=22398.08 00:26:13.544 clat (msec): min=77, max=500, avg=348.57, stdev=68.58 00:26:13.544 lat (msec): min=77, max=500, avg=348.59, stdev=68.57 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 78], 5.00th=[ 245], 10.00th=[ 271], 20.00th=[ 309], 00:26:13.544 | 30.00th=[ 334], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 384], 00:26:13.544 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 401], 00:26:13.544 | 99.00th=[ 485], 99.50th=[ 485], 99.90th=[ 502], 99.95th=[ 502], 00:26:13.544 | 99.99th=[ 502] 00:26:13.544 bw ( KiB/s): min= 128, max= 256, 
per=3.31%, avg=178.40, stdev=60.38, samples=20 00:26:13.544 iops : min= 32, max= 64, avg=44.60, stdev=15.09, samples=20 00:26:13.544 lat (msec) : 100=3.03%, 250=6.93%, 500=89.61%, 750=0.43% 00:26:13.544 cpu : usr=98.41%, sys=1.19%, ctx=17, majf=0, minf=41 00:26:13.544 IO depths : 1=4.8%, 2=11.0%, 4=25.1%, 8=51.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902462: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=66, BW=264KiB/s (270kB/s)(2664KiB/10089msec) 00:26:13.544 slat (nsec): min=8072, max=51703, avg=13117.53, stdev=5818.62 00:26:13.544 clat (msec): min=125, max=393, avg=241.66, stdev=53.00 00:26:13.544 lat (msec): min=125, max=393, avg=241.68, stdev=53.00 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 126], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 192], 00:26:13.544 | 30.00th=[ 226], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:26:13.544 | 70.00th=[ 266], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 330], 00:26:13.544 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:26:13.544 | 99.99th=[ 393] 00:26:13.544 bw ( KiB/s): min= 176, max= 384, per=4.82%, avg=260.00, stdev=49.21, samples=20 00:26:13.544 iops : min= 44, max= 96, avg=65.00, stdev=12.30, samples=20 00:26:13.544 lat (msec) : 250=60.06%, 500=39.94% 00:26:13.544 cpu : usr=97.99%, sys=1.53%, ctx=39, majf=0, minf=51 00:26:13.544 IO depths : 1=0.6%, 2=1.7%, 4=9.0%, 8=76.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=89.4%, 8=5.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902463: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10062msec) 00:26:13.544 slat (nsec): min=8501, max=99386, avg=42081.38, stdev=22434.73 00:26:13.544 clat (msec): min=208, max=511, avg=346.61, stdev=61.09 00:26:13.544 lat (msec): min=208, max=511, avg=346.65, stdev=61.08 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 236], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 279], 00:26:13.544 | 30.00th=[ 313], 40.00th=[ 338], 50.00th=[ 368], 60.00th=[ 380], 00:26:13.544 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.544 | 99.00th=[ 493], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 510], 00:26:13.544 | 99.99th=[ 510] 00:26:13.544 bw ( KiB/s): min= 128, max= 256, per=3.33%, avg=179.20, stdev=59.78, samples=20 00:26:13.544 iops : min= 32, max= 64, avg=44.80, stdev=14.94, samples=20 00:26:13.544 lat (msec) : 250=11.21%, 500=88.36%, 750=0.43% 00:26:13.544 cpu : usr=97.90%, sys=1.47%, ctx=57, majf=0, minf=39 00:26:13.544 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 
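The per-file blocks in this group all come from one 24-thread run: numjobs=8 across filename0, filename1 and filename2 with bs=4k and iodepth=16, as set by target/dif.sh at the start of this pass. The exact job file emitted by gen_fio_conf is not captured in the log; a bash heredoc of roughly this shape, with the bdev names Nvme0n1..Nvme2n1 assumed from the attached Nvme0..Nvme2 controllers, would reproduce the banner printed before these results:

cat > /tmp/dif_randread.fio <<'EOF'
[global]
thread=1            # fio thread mode, required by the spdk_bdev ioengine
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8           # 8 jobs x 3 files gives the "Starting 24 threads" line above

[filename0]
filename=Nvme0n1    # namespace bdev of the Nvme0 controller (assumed name)

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF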
filename2: (groupid=0, jobs=1): err= 0: pid=902464: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10072msec) 00:26:13.544 slat (usec): min=8, max=106, avg=29.09, stdev=27.32 00:26:13.544 clat (msec): min=176, max=464, avg=347.05, stdev=58.14 00:26:13.544 lat (msec): min=176, max=464, avg=347.08, stdev=58.13 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 224], 5.00th=[ 232], 10.00th=[ 245], 20.00th=[ 292], 00:26:13.544 | 30.00th=[ 321], 40.00th=[ 368], 50.00th=[ 380], 60.00th=[ 384], 00:26:13.544 | 70.00th=[ 384], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 401], 00:26:13.544 | 99.00th=[ 422], 99.50th=[ 451], 99.90th=[ 464], 99.95th=[ 464], 00:26:13.544 | 99.99th=[ 464] 00:26:13.544 bw ( KiB/s): min= 128, max= 256, per=3.33%, avg=179.20, stdev=61.33, samples=20 00:26:13.544 iops : min= 32, max= 64, avg=44.80, stdev=15.33, samples=20 00:26:13.544 lat (msec) : 250=14.22%, 500=85.78% 00:26:13.544 cpu : usr=98.48%, sys=1.08%, ctx=23, majf=0, minf=43 00:26:13.544 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902465: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10087msec) 00:26:13.544 slat (nsec): min=8188, max=71533, avg=13548.09, stdev=8969.74 00:26:13.544 clat (msec): min=124, max=317, avg=238.63, stdev=42.84 00:26:13.544 lat (msec): min=124, max=317, avg=238.64, stdev=42.84 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 125], 5.00th=[ 171], 10.00th=[ 186], 20.00th=[ 201], 00:26:13.544 | 30.00th=[ 211], 40.00th=[ 224], 50.00th=[ 245], 60.00th=[ 271], 00:26:13.544 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 292], 00:26:13.544 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 317], 00:26:13.544 | 99.99th=[ 317] 00:26:13.544 bw ( KiB/s): min= 144, max= 384, per=4.88%, avg=262.40, stdev=60.63, samples=20 00:26:13.544 iops : min= 36, max= 96, avg=65.60, stdev=15.16, samples=20 00:26:13.544 lat (msec) : 250=53.27%, 500=46.73% 00:26:13.544 cpu : usr=98.35%, sys=1.26%, ctx=19, majf=0, minf=90 00:26:13.544 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902466: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10089msec) 00:26:13.544 slat (nsec): min=8196, max=97736, avg=22358.93, stdev=22955.50 00:26:13.544 clat (msec): min=135, max=435, avg=265.04, stdev=46.39 00:26:13.544 lat (msec): min=135, max=435, avg=265.06, stdev=46.40 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 136], 5.00th=[ 205], 10.00th=[ 209], 20.00th=[ 220], 00:26:13.544 | 30.00th=[ 241], 40.00th=[ 264], 50.00th=[ 275], 60.00th=[ 279], 00:26:13.544 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 321], 95.00th=[ 342], 00:26:13.544 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 435], 
99.95th=[ 435], 00:26:13.544 | 99.99th=[ 435] 00:26:13.544 bw ( KiB/s): min= 128, max= 256, per=4.39%, avg=236.80, stdev=44.84, samples=20 00:26:13.544 iops : min= 32, max= 64, avg=59.20, stdev=11.21, samples=20 00:26:13.544 lat (msec) : 250=37.17%, 500=62.83% 00:26:13.544 cpu : usr=98.29%, sys=1.17%, ctx=39, majf=0, minf=39 00:26:13.544 IO depths : 1=1.8%, 2=7.9%, 4=24.5%, 8=55.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902467: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10077msec) 00:26:13.544 slat (usec): min=8, max=104, avg=18.17, stdev=18.57 00:26:13.544 clat (msec): min=153, max=349, avg=250.33, stdev=32.58 00:26:13.544 lat (msec): min=153, max=349, avg=250.35, stdev=32.58 00:26:13.544 clat percentiles (msec): 00:26:13.544 | 1.00th=[ 190], 5.00th=[ 199], 10.00th=[ 205], 20.00th=[ 211], 00:26:13.544 | 30.00th=[ 226], 40.00th=[ 243], 50.00th=[ 262], 60.00th=[ 271], 00:26:13.544 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 292], 00:26:13.544 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 351], 99.95th=[ 351], 00:26:13.544 | 99.99th=[ 351] 00:26:13.544 bw ( KiB/s): min= 144, max= 272, per=4.64%, avg=249.60, stdev=25.64, samples=20 00:26:13.544 iops : min= 36, max= 68, avg=62.40, stdev= 6.41, samples=20 00:26:13.544 lat (msec) : 250=45.94%, 500=54.06% 00:26:13.544 cpu : usr=98.40%, sys=1.16%, ctx=21, majf=0, minf=31 00:26:13.544 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:13.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.544 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:13.544 filename2: (groupid=0, jobs=1): err= 0: pid=902468: Mon Jul 15 16:18:57 2024 00:26:13.544 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10072msec) 00:26:13.545 slat (nsec): min=9297, max=52586, avg=28055.22, stdev=8736.88 00:26:13.545 clat (msec): min=171, max=495, avg=347.06, stdev=65.44 00:26:13.545 lat (msec): min=171, max=495, avg=347.09, stdev=65.45 00:26:13.545 clat percentiles (msec): 00:26:13.545 | 1.00th=[ 171], 5.00th=[ 213], 10.00th=[ 241], 20.00th=[ 292], 00:26:13.545 | 30.00th=[ 342], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 380], 00:26:13.545 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 401], 00:26:13.545 | 99.00th=[ 481], 99.50th=[ 489], 99.90th=[ 498], 99.95th=[ 498], 00:26:13.545 | 99.99th=[ 498] 00:26:13.545 bw ( KiB/s): min= 128, max= 384, per=3.33%, avg=179.20, stdev=74.07, samples=20 00:26:13.545 iops : min= 32, max= 96, avg=44.80, stdev=18.52, samples=20 00:26:13.545 lat (msec) : 250=11.64%, 500=88.36% 00:26:13.545 cpu : usr=98.24%, sys=1.37%, ctx=24, majf=0, minf=41 00:26:13.545 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:13.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.545 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.545 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.545 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:26:13.545 00:26:13.545 Run status group 0 (all jobs): 00:26:13.545 READ: bw=5370KiB/s (5499kB/s), 183KiB/s-286KiB/s (188kB/s-293kB/s), io=53.0MiB (55.6MB), run=10011-10103msec 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 bdev_null0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 [2024-07-15 16:18:58.180057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:13.545 16:18:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 bdev_null1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.545 { 00:26:13.545 "params": { 00:26:13.545 "name": "Nvme$subsystem", 00:26:13.545 "trtype": "$TEST_TRANSPORT", 00:26:13.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.545 "adrfam": "ipv4", 00:26:13.545 "trsvcid": "$NVMF_PORT", 00:26:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.545 "hdgst": ${hdgst:-false}, 00:26:13.545 "ddgst": ${ddgst:-false} 00:26:13.545 }, 00:26:13.545 "method": "bdev_nvme_attach_controller" 00:26:13.545 } 00:26:13.545 EOF 00:26:13.545 )") 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.545 { 00:26:13.545 "params": { 00:26:13.545 "name": "Nvme$subsystem", 00:26:13.545 "trtype": "$TEST_TRANSPORT", 00:26:13.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.545 "adrfam": "ipv4", 00:26:13.545 "trsvcid": "$NVMF_PORT", 00:26:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.545 "hdgst": ${hdgst:-false}, 00:26:13.545 "ddgst": ${ddgst:-false} 00:26:13.545 }, 00:26:13.545 "method": "bdev_nvme_attach_controller" 00:26:13.545 } 00:26:13.545 EOF 00:26:13.545 )") 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
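The heredoc snippets built above are joined by jq into the single JSON document printed just below, and the two /dev/fd paths handed to fio come from process substitution, so nothing touches disk. A minimal sketch of that wiring, consistent with the LD_PRELOAD and /usr/src/fio/fio lines that follow (the dif.sh plumbing itself is paraphrased, not quoted):

fio_bdev() {
    # fio loads the SPDK bdev ioengine from the externally built plugin
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio "$@"
}

# /dev/fd/62: JSON telling the plugin which NVMe-oF/TCP controllers to attach
# /dev/fd/61: the generated fio job description
fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)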
00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:13.545 "params": { 00:26:13.545 "name": "Nvme0", 00:26:13.545 "trtype": "tcp", 00:26:13.545 "traddr": "10.0.0.2", 00:26:13.545 "adrfam": "ipv4", 00:26:13.545 "trsvcid": "4420", 00:26:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.545 "hdgst": false, 00:26:13.545 "ddgst": false 00:26:13.545 }, 00:26:13.545 "method": "bdev_nvme_attach_controller" 00:26:13.545 },{ 00:26:13.545 "params": { 00:26:13.545 "name": "Nvme1", 00:26:13.545 "trtype": "tcp", 00:26:13.545 "traddr": "10.0.0.2", 00:26:13.545 "adrfam": "ipv4", 00:26:13.545 "trsvcid": "4420", 00:26:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.545 "hdgst": false, 00:26:13.545 "ddgst": false 00:26:13.545 }, 00:26:13.545 "method": "bdev_nvme_attach_controller" 00:26:13.545 }' 00:26:13.545 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:13.546 16:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.546 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:13.546 ... 00:26:13.546 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:13.546 ... 
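[Editor's note -- illustrative sketch, not part of the captured output.] The rpc_cmd trace above amounts to four RPC calls against the running nvmf target, and the JSON printed just above is what gen_nvmf_target_json hands to the fio spdk_bdev plugin (one bdev_nvme_attach_controller entry per subsystem, pointing back at the same listener). A minimal standalone equivalent, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket, with the same sizes and NQNs as this run:

  ./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1   # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420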
00:26:13.546 fio-3.35 00:26:13.546 Starting 4 threads 00:26:13.546 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.809 00:26:18.809 filename0: (groupid=0, jobs=1): err= 0: pid=903850: Mon Jul 15 16:19:04 2024 00:26:18.809 read: IOPS=1910, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5001msec) 00:26:18.809 slat (nsec): min=3941, max=87372, avg=18987.29, stdev=8362.65 00:26:18.809 clat (usec): min=753, max=7993, avg=4116.80, stdev=514.00 00:26:18.809 lat (usec): min=773, max=8005, avg=4135.79, stdev=514.40 00:26:18.809 clat percentiles (usec): 00:26:18.809 | 1.00th=[ 2343], 5.00th=[ 3589], 10.00th=[ 3818], 20.00th=[ 3949], 00:26:18.809 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:26:18.809 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4686], 00:26:18.809 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7373], 00:26:18.809 | 99.99th=[ 7963] 00:26:18.809 bw ( KiB/s): min=15024, max=15728, per=25.00%, avg=15315.56, stdev=243.02, samples=9 00:26:18.809 iops : min= 1878, max= 1966, avg=1914.44, stdev=30.38, samples=9 00:26:18.809 lat (usec) : 1000=0.06% 00:26:18.809 lat (msec) : 2=0.63%, 4=27.96%, 10=71.35% 00:26:18.809 cpu : usr=92.58%, sys=5.50%, ctx=63, majf=0, minf=90 00:26:18.809 IO depths : 1=0.4%, 2=20.3%, 4=53.6%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 issued rwts: total=9553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.809 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:18.809 filename0: (groupid=0, jobs=1): err= 0: pid=903851: Mon Jul 15 16:19:04 2024 00:26:18.809 read: IOPS=1927, BW=15.1MiB/s (15.8MB/s)(75.3MiB/5003msec) 00:26:18.809 slat (nsec): min=3888, max=99868, avg=14076.54, stdev=7575.57 00:26:18.809 clat (usec): min=751, max=9199, avg=4105.69, stdev=429.89 00:26:18.809 lat (usec): min=764, max=9227, avg=4119.76, stdev=430.20 00:26:18.809 clat percentiles (usec): 00:26:18.809 | 1.00th=[ 2868], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 3949], 00:26:18.809 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:18.809 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:26:18.809 | 99.00th=[ 5669], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 8979], 00:26:18.809 | 99.99th=[ 9241] 00:26:18.809 bw ( KiB/s): min=14992, max=16080, per=25.16%, avg=15414.40, stdev=348.69, samples=10 00:26:18.809 iops : min= 1874, max= 2010, avg=1926.80, stdev=43.59, samples=10 00:26:18.809 lat (usec) : 1000=0.01% 00:26:18.809 lat (msec) : 2=0.27%, 4=26.21%, 10=73.51% 00:26:18.809 cpu : usr=95.14%, sys=4.40%, ctx=9, majf=0, minf=60 00:26:18.809 IO depths : 1=0.2%, 2=12.2%, 4=59.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 issued rwts: total=9642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.809 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:18.809 filename1: (groupid=0, jobs=1): err= 0: pid=903852: Mon Jul 15 16:19:04 2024 00:26:18.809 read: IOPS=1915, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5003msec) 00:26:18.809 slat (nsec): min=3867, max=76357, avg=18353.34, stdev=9566.92 00:26:18.809 clat (usec): min=799, max=8640, avg=4106.81, stdev=473.38 00:26:18.809 lat (usec): min=815, max=8652, avg=4125.16, stdev=473.72 00:26:18.809 clat 
percentiles (usec): 00:26:18.809 | 1.00th=[ 2900], 5.00th=[ 3589], 10.00th=[ 3818], 20.00th=[ 3949], 00:26:18.809 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:26:18.809 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4555], 00:26:18.809 | 99.00th=[ 6063], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 8455], 00:26:18.809 | 99.99th=[ 8586] 00:26:18.809 bw ( KiB/s): min=14960, max=15872, per=25.02%, avg=15324.60, stdev=333.25, samples=10 00:26:18.809 iops : min= 1870, max= 1984, avg=1915.50, stdev=41.64, samples=10 00:26:18.809 lat (usec) : 1000=0.07% 00:26:18.809 lat (msec) : 2=0.46%, 4=29.83%, 10=69.64% 00:26:18.809 cpu : usr=95.08%, sys=4.34%, ctx=12, majf=0, minf=70 00:26:18.809 IO depths : 1=0.5%, 2=19.7%, 4=53.9%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.809 issued rwts: total=9584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.809 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:18.809 filename1: (groupid=0, jobs=1): err= 0: pid=903853: Mon Jul 15 16:19:04 2024 00:26:18.809 read: IOPS=1905, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5001msec) 00:26:18.809 slat (nsec): min=4019, max=84707, avg=19439.52, stdev=9545.16 00:26:18.809 clat (usec): min=738, max=7532, avg=4120.57, stdev=537.60 00:26:18.809 lat (usec): min=751, max=7552, avg=4140.01, stdev=537.86 00:26:18.809 clat percentiles (usec): 00:26:18.809 | 1.00th=[ 2343], 5.00th=[ 3621], 10.00th=[ 3851], 20.00th=[ 3949], 00:26:18.809 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:26:18.809 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4817], 00:26:18.809 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7373], 00:26:18.809 | 99.99th=[ 7504] 00:26:18.809 bw ( KiB/s): min=14880, max=15744, per=25.00%, avg=15317.22, stdev=303.13, samples=9 00:26:18.809 iops : min= 1860, max= 1968, avg=1914.56, stdev=37.78, samples=9 00:26:18.810 lat (usec) : 750=0.01%, 1000=0.15% 00:26:18.810 lat (msec) : 2=0.68%, 4=28.96%, 10=70.20% 00:26:18.810 cpu : usr=94.84%, sys=4.58%, ctx=7, majf=0, minf=120 00:26:18.810 IO depths : 1=0.3%, 2=20.6%, 4=53.2%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:18.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.810 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.810 issued rwts: total=9531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.810 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:18.810 00:26:18.810 Run status group 0 (all jobs): 00:26:18.810 READ: bw=59.8MiB/s (62.7MB/s), 14.9MiB/s-15.1MiB/s (15.6MB/s-15.8MB/s), io=299MiB (314MB), run=5001-5003msec 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 00:26:18.810 real 0m24.115s 00:26:18.810 user 4m35.197s 00:26:18.810 sys 0m5.909s 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 ************************************ 00:26:18.810 END TEST fio_dif_rand_params 00:26:18.810 ************************************ 00:26:18.810 16:19:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:18.810 16:19:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:18.810 16:19:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:18.810 16:19:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 ************************************ 00:26:18.810 START TEST fio_dif_digest 00:26:18.810 ************************************ 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:18.810 16:19:04 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 bdev_null0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:18.810 [2024-07-15 16:19:04.544496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.810 { 00:26:18.810 "params": { 00:26:18.810 "name": "Nvme$subsystem", 00:26:18.810 "trtype": "$TEST_TRANSPORT", 00:26:18.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.810 "adrfam": "ipv4", 00:26:18.810 "trsvcid": "$NVMF_PORT", 00:26:18.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.810 "hdgst": ${hdgst:-false}, 00:26:18.810 "ddgst": ${ddgst:-false} 00:26:18.810 }, 00:26:18.810 "method": "bdev_nvme_attach_controller" 00:26:18.810 } 00:26:18.810 EOF 00:26:18.810 )") 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:18.810 "params": { 00:26:18.810 "name": "Nvme0", 00:26:18.810 "trtype": "tcp", 00:26:18.810 "traddr": "10.0.0.2", 00:26:18.810 "adrfam": "ipv4", 00:26:18.810 "trsvcid": "4420", 00:26:18.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:18.810 "hdgst": true, 00:26:18.810 "ddgst": true 00:26:18.810 }, 00:26:18.810 "method": "bdev_nvme_attach_controller" 00:26:18.810 }' 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:18.810 16:19:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:18.811 ... 
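[Editor's note -- illustrative sketch, not part of the captured output.] The digest case differs from the fio_dif_rand_params run above in two respects visible in the trace: the null bdev behind cnode0 is created with --dif-type 3, and the attach-controller JSON sets "hdgst": true and "ddgst": true so the NVMe/TCP connection carries header and data digests. A hedged sketch of the equivalent fio invocation; bdev.json and dif.fio are hypothetical file names standing in for the /dev/fd/62 and /dev/fd/61 descriptors the script generates (assumption: the JSON is wrapped in the usual SPDK "subsystems"/"bdev" config structure, of which the log only prints the attach-controller entries):

  # bdev.json (hypothetical) carries the bdev_nvme_attach_controller entry printed
  # above, with hdgst/ddgst set to true.
  # dif.fio (hypothetical) mirrors the preamble above: rw=randread, bs=128k,
  # iodepth=3, numjobs=3, runtime=10, filename presumably Nvme0n1 (SPDK's
  # <controller>n<nsid> bdev naming).
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio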
00:26:18.811 fio-3.35 00:26:18.811 Starting 3 threads 00:26:19.069 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.344 00:26:31.344 filename0: (groupid=0, jobs=1): err= 0: pid=904608: Mon Jul 15 16:19:15 2024 00:26:31.344 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10044msec) 00:26:31.344 slat (nsec): min=7293, max=76356, avg=19051.99, stdev=5278.21 00:26:31.344 clat (usec): min=11708, max=50647, avg=14799.29, stdev=1439.84 00:26:31.344 lat (usec): min=11722, max=50668, avg=14818.34, stdev=1439.93 00:26:31.344 clat percentiles (usec): 00:26:31.344 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13566], 20.00th=[13960], 00:26:31.344 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:26:31.344 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:26:31.344 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[48497], 00:26:31.344 | 99.99th=[50594] 00:26:31.344 bw ( KiB/s): min=25344, max=26624, per=32.95%, avg=25958.40, stdev=375.14, samples=20 00:26:31.344 iops : min= 198, max= 208, avg=202.80, stdev= 2.93, samples=20 00:26:31.344 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:26:31.344 cpu : usr=93.99%, sys=5.51%, ctx=21, majf=0, minf=165 00:26:31.344 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.344 filename0: (groupid=0, jobs=1): err= 0: pid=904609: Mon Jul 15 16:19:15 2024 00:26:31.344 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10044msec) 00:26:31.344 slat (nsec): min=7289, max=50539, avg=17605.42, stdev=5102.23 00:26:31.344 clat (usec): min=10799, max=53697, avg=14368.51, stdev=1520.39 00:26:31.344 lat (usec): min=10840, max=53715, avg=14386.11, stdev=1520.27 00:26:31.344 clat percentiles (usec): 00:26:31.344 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:26:31.344 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:26:31.344 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:26:31.344 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[51119], 00:26:31.344 | 99.99th=[53740] 00:26:31.344 bw ( KiB/s): min=25856, max=27904, per=33.95%, avg=26739.20, stdev=473.32, samples=20 00:26:31.344 iops : min= 202, max= 218, avg=208.90, stdev= 3.70, samples=20 00:26:31.344 lat (msec) : 20=99.90%, 100=0.10% 00:26:31.344 cpu : usr=94.04%, sys=5.48%, ctx=17, majf=0, minf=109 00:26:31.344 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 issued rwts: total=2091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.344 filename0: (groupid=0, jobs=1): err= 0: pid=904610: Mon Jul 15 16:19:15 2024 00:26:31.344 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(258MiB/10044msec) 00:26:31.344 slat (nsec): min=7412, max=42255, avg=16128.80, stdev=4540.23 00:26:31.344 clat (usec): min=11393, max=51878, avg=14586.86, stdev=1482.58 00:26:31.344 lat (usec): min=11411, max=51893, avg=14602.99, stdev=1482.58 00:26:31.344 clat percentiles (usec): 00:26:31.344 | 1.00th=[12387], 
5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:26:31.344 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:26:31.344 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:26:31.344 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[47973], 00:26:31.344 | 99.99th=[51643] 00:26:31.344 bw ( KiB/s): min=25856, max=26880, per=33.44%, avg=26345.00, stdev=329.49, samples=20 00:26:31.344 iops : min= 202, max= 210, avg=205.80, stdev= 2.59, samples=20 00:26:31.344 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:26:31.344 cpu : usr=93.46%, sys=6.07%, ctx=24, majf=0, minf=157 00:26:31.344 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.344 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.344 00:26:31.344 Run status group 0 (all jobs): 00:26:31.344 READ: bw=76.9MiB/s (80.7MB/s), 25.3MiB/s-26.0MiB/s (26.5MB/s-27.3MB/s), io=773MiB (810MB), run=10044-10044msec 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.344 00:26:31.344 real 0m11.269s 00:26:31.344 user 0m29.579s 00:26:31.344 sys 0m2.001s 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.344 16:19:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.344 ************************************ 00:26:31.344 END TEST fio_dif_digest 00:26:31.344 ************************************ 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:31.344 16:19:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:31.344 16:19:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:26:31.344 rmmod nvme_tcp 00:26:31.344 rmmod nvme_fabrics 00:26:31.344 rmmod nvme_keyring 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 898551 ']' 00:26:31.344 16:19:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 898551 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 898551 ']' 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 898551 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898551 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898551' 00:26:31.344 killing process with pid 898551 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@967 -- # kill 898551 00:26:31.344 16:19:15 nvmf_dif -- common/autotest_common.sh@972 -- # wait 898551 00:26:31.344 16:19:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:31.344 16:19:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:31.344 Waiting for block devices as requested 00:26:31.603 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:31.603 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:31.603 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:31.860 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:31.860 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:31.860 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:31.860 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.118 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:32.118 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:32.378 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:32.378 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:32.378 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:32.378 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:32.378 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:32.638 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:32.638 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.638 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:32.897 16:19:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.897 16:19:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.897 16:19:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.897 16:19:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.897 16:19:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.897 16:19:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:32.897 16:19:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.828 16:19:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.828 00:26:34.828 real 1m6.847s 00:26:34.828 user 6m32.181s 00:26:34.828 sys 0m17.309s 00:26:34.828 16:19:20 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.828 
16:19:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 ************************************ 00:26:34.828 END TEST nvmf_dif 00:26:34.828 ************************************ 00:26:34.828 16:19:20 -- common/autotest_common.sh@1142 -- # return 0 00:26:34.828 16:19:20 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:34.828 16:19:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:34.828 16:19:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.828 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 ************************************ 00:26:34.828 START TEST nvmf_abort_qd_sizes 00:26:34.828 ************************************ 00:26:34.828 16:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:35.088 * Looking for test storage... 00:26:35.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:35.088 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.089 16:19:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.089 16:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:36.992 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:36.992 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:36.992 Found net devices under 0000:09:00.0: cvl_0_0 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:36.992 Found net devices under 0000:09:00.1: cvl_0_1 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
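[Editor's note -- illustrative sketch, not part of the captured output.] The discovery loop traced above matches the E810 ports by PCI vendor/device ID and then reads the kernel's view of which net device sits on each function from sysfs. A minimal sketch of that mapping, assuming the same 0000:09:00.x addresses seen in this log:

  for pci in 0000:09:00.0 0000:09:00.1; do
      # each entry under .../net/ is a netdev bound to that PCI function (cvl_0_0, cvl_0_1 here)
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
      done
  done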
00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.992 16:19:22 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:26:37.253 00:26:37.253 --- 10.0.0.2 ping statistics --- 00:26:37.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.253 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:26:37.253 00:26:37.253 --- 10.0.0.1 ping statistics --- 00:26:37.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.253 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:37.253 16:19:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.187 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:38.187 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:38.187 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:38.187 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:38.187 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:38.446 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:38.446 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:38.446 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:38.446 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:39.384 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=909518 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 909518 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 909518 ']' 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
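[Editor's note -- illustrative sketch, not part of the captured output.] The nvmf_tcp_init sequence traced a few records back builds the usual single-box two-end topology: the target-side port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, and the pings above verify both directions before nvmf_tgt is launched inside the namespace. Condensed to the commands the trace shows (device names and addresses as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1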
00:26:39.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.384 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:39.643 [2024-07-15 16:19:25.429985] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:26:39.643 [2024-07-15 16:19:25.430068] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.643 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.643 [2024-07-15 16:19:25.492511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.643 [2024-07-15 16:19:25.597609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.643 [2024-07-15 16:19:25.597666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.643 [2024-07-15 16:19:25.597689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.643 [2024-07-15 16:19:25.597699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.643 [2024-07-15 16:19:25.597709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.643 [2024-07-15 16:19:25.597785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.643 [2024-07-15 16:19:25.597846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.643 [2024-07-15 16:19:25.597915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.643 [2024-07-15 16:19:25.597913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:26:39.901 16:19:25 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.901 16:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:39.901 ************************************ 00:26:39.901 START TEST spdk_target_abort 00:26:39.901 ************************************ 00:26:39.901 16:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:39.901 16:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:39.901 16:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:26:39.901 16:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.901 16:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.186 spdk_targetn1 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.186 [2024-07-15 16:19:28.600769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:43.186 [2024-07-15 16:19:28.633063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:43.186 16:19:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:43.186 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:46.474 Initializing NVMe Controllers 00:26:46.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:46.474 Initialization complete. Launching workers. 00:26:46.474 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12072, failed: 0 00:26:46.474 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 10812 00:26:46.474 success 744, unsuccess 516, failed 0 00:26:46.474 16:19:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.474 16:19:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.474 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.774 Initializing NVMe Controllers 00:26:49.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:49.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:49.774 Initialization complete. Launching workers. 00:26:49.774 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8571, failed: 0 00:26:49.774 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7305 00:26:49.774 success 329, unsuccess 937, failed 0 00:26:49.774 16:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.774 16:19:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:49.774 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.076 Initializing NVMe Controllers 00:26:53.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:53.076 Initialization complete. Launching workers. 
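(Reader's note: the three runs above are produced by a small queue-depth sweep in target/abort_qd_sizes.sh. The sketch below condenses what the trace shows — the abort example binary, its flags, and the target string are taken verbatim from the log; SPDK_ROOT is a stand-in for the jenkins workspace checkout and is an assumption, not part of the original script.)

  # Minimal sketch of the rabort sweep, assuming $SPDK_ROOT points at the SPDK tree used above.
  # Each pass re-runs the abort example at a different abort queue depth against the TCP subsystem.
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  qds=(4 24 64)
  for qd in "${qds[@]}"; do
      "$SPDK_ROOT/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done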
00:26:53.076 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31548, failed: 0 00:26:53.076 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2711, failed to submit 28837 00:26:53.076 success 493, unsuccess 2218, failed 0 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.076 16:19:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 909518 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 909518 ']' 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 909518 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 909518 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 909518' 00:26:54.011 killing process with pid 909518 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 909518 00:26:54.011 16:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 909518 00:26:54.270 00:26:54.270 real 0m14.259s 00:26:54.270 user 0m53.614s 00:26:54.270 sys 0m2.716s 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.270 ************************************ 00:26:54.270 END TEST spdk_target_abort 00:26:54.270 ************************************ 00:26:54.270 16:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:54.270 16:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:54.270 16:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:54.270 16:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.270 16:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.270 
************************************ 00:26:54.270 START TEST kernel_target_abort 00:26:54.270 ************************************ 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.270 16:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:55.205 Waiting for block devices as requested 00:26:55.205 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:55.465 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:55.465 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:55.465 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:55.725 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:55.725 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:55.725 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:55.725 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:55.985 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:55.985 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:56.243 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:56.243 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:56.243 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:56.243 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:56.502 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:56.502 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:56.502 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.760 No valid GPT data, bailing 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.760 16:19:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:56.760 00:26:56.760 Discovery Log Number of Records 2, Generation counter 2 00:26:56.760 =====Discovery Log Entry 0====== 00:26:56.760 trtype: tcp 00:26:56.760 adrfam: ipv4 00:26:56.760 subtype: current discovery subsystem 00:26:56.760 treq: not specified, sq flow control disable supported 00:26:56.760 portid: 1 00:26:56.760 trsvcid: 4420 00:26:56.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.760 traddr: 10.0.0.1 00:26:56.760 eflags: none 00:26:56.760 sectype: none 00:26:56.760 =====Discovery Log Entry 1====== 00:26:56.760 trtype: tcp 00:26:56.760 adrfam: ipv4 00:26:56.760 subtype: nvme subsystem 00:26:56.760 treq: not specified, sq flow control disable supported 00:26:56.760 portid: 1 00:26:56.760 trsvcid: 4420 00:26:56.760 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:56.760 traddr: 10.0.0.1 00:26:56.760 eflags: none 00:26:56.760 sectype: none 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:56.760 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.761 16:19:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.761 16:19:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.761 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.047 Initializing NVMe Controllers 00:27:00.047 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:00.047 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:00.047 Initialization complete. Launching workers. 00:27:00.047 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53921, failed: 0 00:27:00.047 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53921, failed to submit 0 00:27:00.047 success 0, unsuccess 53921, failed 0 00:27:00.047 16:19:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:00.047 16:19:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:00.047 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.335 Initializing NVMe Controllers 00:27:03.335 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:03.335 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:03.335 Initialization complete. Launching workers. 
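(Reader's note: the kernel soft target exercised by these runs was stood up a few lines above through the nvmet configfs interface. The sketch below condenses those steps; the attribute file names follow the standard nvmet configfs layout and the redirect targets are not visible in the xtrace output, so they are an assumption rather than a copy of nvmf/common.sh.)

  # Sketch of configure_kernel_target as traced above, assuming /dev/nvme0n1 is the device found by the GPT check.
  modprobe nvmet          # as in the trace
  modprobe nvmet_tcp      # assumed: needed before a tcp port can be enabled
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"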
00:27:03.335 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100490, failed: 0 00:27:03.335 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25214, failed to submit 75276 00:27:03.335 success 0, unsuccess 25214, failed 0 00:27:03.335 16:19:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.335 16:19:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.335 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.633 Initializing NVMe Controllers 00:27:06.633 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:06.633 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:06.633 Initialization complete. Launching workers. 00:27:06.633 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97072, failed: 0 00:27:06.633 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24266, failed to submit 72806 00:27:06.633 success 0, unsuccess 24266, failed 0 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:06.633 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.568 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:07.568 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:07.568 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:07.568 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:08.504 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:08.504 00:27:08.504 real 0m14.407s 00:27:08.504 user 0m6.639s 00:27:08.504 sys 0m3.119s 00:27:08.504 16:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:08.504 16:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.504 ************************************ 00:27:08.504 END TEST kernel_target_abort 00:27:08.504 ************************************ 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.504 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.504 rmmod nvme_tcp 00:27:08.762 rmmod nvme_fabrics 00:27:08.762 rmmod nvme_keyring 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 909518 ']' 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 909518 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 909518 ']' 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 909518 00:27:08.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (909518) - No such process 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 909518 is not found' 00:27:08.762 Process with pid 909518 is not found 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:08.762 16:19:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:09.701 Waiting for block devices as requested 00:27:09.961 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.961 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:09.961 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.220 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.220 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.220 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.220 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.479 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.479 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:10.479 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:10.738 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:10.738 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.738 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.738 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:27:10.998 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.998 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.998 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.258 16:19:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.185 16:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:13.185 00:27:13.185 real 0m38.326s 00:27:13.185 user 1m2.415s 00:27:13.185 sys 0m9.302s 00:27:13.185 16:19:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.185 16:19:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:13.185 ************************************ 00:27:13.185 END TEST nvmf_abort_qd_sizes 00:27:13.185 ************************************ 00:27:13.185 16:19:59 -- common/autotest_common.sh@1142 -- # return 0 00:27:13.185 16:19:59 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:13.185 16:19:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:13.185 16:19:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.185 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.185 ************************************ 00:27:13.185 START TEST keyring_file 00:27:13.185 ************************************ 00:27:13.185 16:19:59 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:13.444 * Looking for test storage... 
00:27:13.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:13.444 16:19:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:13.444 16:19:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.445 16:19:59 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.445 16:19:59 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.445 16:19:59 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.445 16:19:59 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.445 16:19:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.445 16:19:59 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.445 16:19:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:13.445 16:19:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DC2PhtZKE8 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:13.445 16:19:59 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DC2PhtZKE8 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DC2PhtZKE8 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DC2PhtZKE8 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.X42glizuDM 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:13.445 16:19:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X42glizuDM 00:27:13.445 16:19:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.X42glizuDM 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.X42glizuDM 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=915294 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:13.445 16:19:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 915294 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 915294 ']' 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.445 16:19:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:13.445 [2024-07-15 16:19:59.412452] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
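(Reader's note: the two PSK files used by this test are prepared just above by prep_key in test/keyring/common.sh: a configured hex key is wrapped into the NVMe TLS interchange format and written to a mode-0600 temp file. The sketch below is a rough equivalent; the redirect into the temp file is not visible in the xtrace output, so it is inferred, and format_interchange_psk is the helper from test/nvmf/common.sh seen in the trace.)

  # Sketch of prep_key key0 00112233445566778899aabbccddeeff 0, per the trace above.
  key_hex=00112233445566778899aabbccddeeff
  key_path=$(mktemp)                              # e.g. /tmp/tmp.DC2PhtZKE8 in this run
  format_interchange_psk "$key_hex" 0 > "$key_path"   # emits the NVMeTLSkey-1 interchange string
  chmod 0600 "$key_path"   # the file keyring rejects group/other-readable keys (see the 0100660 error further down)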
00:27:13.445 [2024-07-15 16:19:59.412554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915294 ] 00:27:13.445 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.705 [2024-07-15 16:19:59.471168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.705 [2024-07-15 16:19:59.584102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.963 16:19:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.963 16:19:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:13.963 16:19:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:13.963 16:19:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:13.964 [2024-07-15 16:19:59.824615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.964 null0 00:27:13.964 [2024-07-15 16:19:59.856674] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:13.964 [2024-07-15 16:19:59.857120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:13.964 [2024-07-15 16:19:59.864689] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.964 16:19:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:13.964 [2024-07-15 16:19:59.872700] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:13.964 request: 00:27:13.964 { 00:27:13.964 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.964 "secure_channel": false, 00:27:13.964 "listen_address": { 00:27:13.964 "trtype": "tcp", 00:27:13.964 "traddr": "127.0.0.1", 00:27:13.964 "trsvcid": "4420" 00:27:13.964 }, 00:27:13.964 "method": "nvmf_subsystem_add_listener", 00:27:13.964 "req_id": 1 00:27:13.964 } 00:27:13.964 Got JSON-RPC error response 00:27:13.964 response: 00:27:13.964 { 00:27:13.964 "code": -32602, 00:27:13.964 "message": "Invalid parameters" 00:27:13.964 } 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 
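(Reader's note: the negative test above re-adds a listener that already exists and expects the JSON-RPC error shown. Issued by hand it would look roughly like the following; the rpc.py flag spelling is assumed to match what rpc_cmd passes through in the trace.)

  # Hypothetical manual reproduction of the duplicate-listener check above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
  # expected to fail with the response shown above: "code": -32602, "message": "Invalid parameters"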
00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:13.964 16:19:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=915315 00:27:13.964 16:19:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:13.964 16:19:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 915315 /var/tmp/bperf.sock 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 915315 ']' 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:13.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.964 16:19:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:13.964 [2024-07-15 16:19:59.917694] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:27:13.964 [2024-07-15 16:19:59.917772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915315 ] 00:27:13.964 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.221 [2024-07-15 16:19:59.974926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.221 [2024-07-15 16:20:00.089278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.221 16:20:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.221 16:20:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:14.221 16:20:00 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:14.221 16:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:14.478 16:20:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X42glizuDM 00:27:14.478 16:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X42glizuDM 00:27:14.736 16:20:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:14.736 16:20:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:14.736 16:20:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:14.736 16:20:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:14.736 16:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:14.994 16:20:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DC2PhtZKE8 == \/\t\m\p\/\t\m\p\.\D\C\2\P\h\t\Z\K\E\8 ]] 00:27:14.994 16:20:00 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:27:14.994 16:20:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:14.994 16:20:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:14.994 16:20:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:14.994 16:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.253 16:20:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.X42glizuDM == \/\t\m\p\/\t\m\p\.\X\4\2\g\l\i\z\u\D\M ]] 00:27:15.253 16:20:01 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:15.253 16:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:15.253 16:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.253 16:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.253 16:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.253 16:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:15.510 16:20:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:15.510 16:20:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:15.510 16:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:15.510 16:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.510 16:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.510 16:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:15.510 16:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.768 16:20:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:15.768 16:20:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:15.768 16:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.025 [2024-07-15 16:20:01.907826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:16.025 nvme0n1 00:27:16.025 16:20:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:16.025 16:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:16.025 16:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:16.025 16:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:16.025 16:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.025 16:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:16.283 16:20:02 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:16.283 16:20:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:16.283 16:20:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:16.283 16:20:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:16.283 16:20:02 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:27:16.283 16:20:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:16.283 16:20:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.539 16:20:02 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:16.539 16:20:02 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.795 Running I/O for 1 seconds... 00:27:17.726 00:27:17.726 Latency(us) 00:27:17.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.726 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:17.726 nvme0n1 : 1.01 10002.26 39.07 0.00 0.00 12750.94 4611.79 19709.35 00:27:17.726 =================================================================================================================== 00:27:17.726 Total : 10002.26 39.07 0.00 0.00 12750.94 4611.79 19709.35 00:27:17.726 0 00:27:17.726 16:20:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:17.726 16:20:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:17.984 16:20:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:17.984 16:20:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:17.984 16:20:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:17.984 16:20:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:17.984 16:20:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:17.984 16:20:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:18.241 16:20:04 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:18.241 16:20:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:18.241 16:20:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:18.241 16:20:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:18.241 16:20:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.241 16:20:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:18.241 16:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.499 16:20:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:18.499 16:20:04 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.499 16:20:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:18.499 16:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:18.757 [2024-07-15 16:20:04.584794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:18.757 [2024-07-15 16:20:04.585263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa939a0 (107): Transport endpoint is not connected 00:27:18.757 [2024-07-15 16:20:04.586255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa939a0 (9): Bad file descriptor 00:27:18.757 [2024-07-15 16:20:04.587254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:18.757 [2024-07-15 16:20:04.587281] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:18.757 [2024-07-15 16:20:04.587295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:18.757 request: 00:27:18.757 { 00:27:18.757 "name": "nvme0", 00:27:18.757 "trtype": "tcp", 00:27:18.757 "traddr": "127.0.0.1", 00:27:18.757 "adrfam": "ipv4", 00:27:18.757 "trsvcid": "4420", 00:27:18.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.757 "prchk_reftag": false, 00:27:18.757 "prchk_guard": false, 00:27:18.757 "hdgst": false, 00:27:18.757 "ddgst": false, 00:27:18.757 "psk": "key1", 00:27:18.757 "method": "bdev_nvme_attach_controller", 00:27:18.757 "req_id": 1 00:27:18.757 } 00:27:18.757 Got JSON-RPC error response 00:27:18.757 response: 00:27:18.757 { 00:27:18.757 "code": -5, 00:27:18.757 "message": "Input/output error" 00:27:18.757 } 00:27:18.757 16:20:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:18.757 16:20:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.757 16:20:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.757 16:20:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.757 16:20:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:18.757 16:20:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:18.757 16:20:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:18.757 16:20:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.757 16:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.757 16:20:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:19.014 16:20:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:19.014 16:20:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:19.014 16:20:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:19.014 16:20:04 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:27:19.014 16:20:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.014 16:20:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:19.014 16:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.271 16:20:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:19.271 16:20:05 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:19.271 16:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:19.528 16:20:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:19.528 16:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:19.785 16:20:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:19.785 16:20:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:19.785 16:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.043 16:20:05 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:20.043 16:20:05 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DC2PhtZKE8 00:27:20.043 16:20:05 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.043 16:20:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.043 16:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.300 [2024-07-15 16:20:06.073376] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DC2PhtZKE8': 0100660 00:27:20.300 [2024-07-15 16:20:06.073415] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:20.300 request: 00:27:20.300 { 00:27:20.300 "name": "key0", 00:27:20.300 "path": "/tmp/tmp.DC2PhtZKE8", 00:27:20.300 "method": "keyring_file_add_key", 00:27:20.300 "req_id": 1 00:27:20.300 } 00:27:20.300 Got JSON-RPC error response 00:27:20.300 response: 00:27:20.300 { 00:27:20.300 "code": -1, 00:27:20.300 "message": "Operation not permitted" 00:27:20.300 } 00:27:20.300 16:20:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:20.300 16:20:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:20.300 16:20:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:20.300 16:20:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
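The failure above is the keyring_file backend enforcing owner-only access on key files: with mode 0660 the add is rejected ("Invalid permissions for key file ... 0100660", surfaced over JSON-RPC as "Operation not permitted"), and it only succeeds once the file is back at 0600. Outside the harness, that registration flow can be sketched roughly as follows; the rpc.py location is an assumption, while the socket path, RPC names, and the interchange-format PSK are the ones used in this run.

#!/usr/bin/env bash
# Sketch only: register a file-based TLS PSK with SPDK's keyring_file backend.
# ./scripts/rpc.py is an assumed path; /var/tmp/bperf.sock matches this run.
set -e
rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

keyfile=$(mktemp)
# Interchange-format PSK derived from the test key 00112233445566778899aabbccddeeff.
echo -n "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$keyfile"

# keyring_file refuses anything wider than owner read/write, so force mode 0600.
chmod 0600 "$keyfile"
rpc keyring_file_add_key key0 "$keyfile"

# The key is then referenced by name when attaching the TLS-enabled controller.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

Deleting the file afterwards (as keyring/file.sh@86 does with rm -f) is what triggers the later "Could not stat key file" / "No such device" error once the key is dereferenced again.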
00:27:20.300 16:20:06 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DC2PhtZKE8 00:27:20.300 16:20:06 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.300 16:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DC2PhtZKE8 00:27:20.556 16:20:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DC2PhtZKE8 00:27:20.556 16:20:06 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:20.556 16:20:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:20.556 16:20:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:20.556 16:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:20.556 16:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.556 16:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:20.814 16:20:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:20.814 16:20:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.814 16:20:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.814 16:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:20.814 [2024-07-15 16:20:06.803358] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DC2PhtZKE8': No such file or directory 00:27:20.814 [2024-07-15 16:20:06.803389] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:20.814 [2024-07-15 16:20:06.803429] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:20.814 [2024-07-15 16:20:06.803440] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:20.814 [2024-07-15 16:20:06.803451] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:20.814 request: 00:27:20.814 { 00:27:20.814 "name": "nvme0", 00:27:20.814 "trtype": "tcp", 00:27:20.814 "traddr": "127.0.0.1", 00:27:20.814 "adrfam": "ipv4", 00:27:20.814 "trsvcid": "4420", 00:27:20.814 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:27:20.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:20.814 "prchk_reftag": false, 00:27:20.814 "prchk_guard": false, 00:27:20.814 "hdgst": false, 00:27:20.814 "ddgst": false, 00:27:20.814 "psk": "key0", 00:27:20.814 "method": "bdev_nvme_attach_controller", 00:27:20.814 "req_id": 1 00:27:20.814 } 00:27:20.814 Got JSON-RPC error response 00:27:20.814 response: 00:27:20.814 { 00:27:20.814 "code": -19, 00:27:20.814 "message": "No such device" 00:27:20.814 } 00:27:21.071 16:20:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:21.071 16:20:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.071 16:20:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.071 16:20:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.071 16:20:06 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:21.071 16:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:21.328 16:20:07 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fDGzsDT57L 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:21.328 16:20:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fDGzsDT57L 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fDGzsDT57L 00:27:21.328 16:20:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.fDGzsDT57L 00:27:21.328 16:20:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fDGzsDT57L 00:27:21.328 16:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fDGzsDT57L 00:27:21.586 16:20:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:21.586 16:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:21.843 nvme0n1 00:27:21.843 16:20:07 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:27:21.843 16:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:21.843 16:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:21.843 16:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.843 16:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:21.843 16:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.100 16:20:07 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:22.100 16:20:07 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:22.100 16:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:22.357 16:20:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:22.357 16:20:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:22.357 16:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.357 16:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:22.357 16:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.614 16:20:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:22.614 16:20:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:22.614 16:20:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:22.614 16:20:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.614 16:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.614 16:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:22.614 16:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.872 16:20:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:22.872 16:20:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:22.872 16:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:23.129 16:20:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:23.129 16:20:08 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:23.129 16:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.386 16:20:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:23.386 16:20:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fDGzsDT57L 00:27:23.386 16:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fDGzsDT57L 00:27:23.642 16:20:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X42glizuDM 00:27:23.642 16:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X42glizuDM 00:27:23.898 16:20:09 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:23.898 16:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:24.155 nvme0n1 00:27:24.155 16:20:09 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:24.155 16:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:24.412 16:20:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:24.412 "subsystems": [ 00:27:24.412 { 00:27:24.412 "subsystem": "keyring", 00:27:24.412 "config": [ 00:27:24.412 { 00:27:24.412 "method": "keyring_file_add_key", 00:27:24.412 "params": { 00:27:24.412 "name": "key0", 00:27:24.412 "path": "/tmp/tmp.fDGzsDT57L" 00:27:24.412 } 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "method": "keyring_file_add_key", 00:27:24.412 "params": { 00:27:24.412 "name": "key1", 00:27:24.412 "path": "/tmp/tmp.X42glizuDM" 00:27:24.412 } 00:27:24.412 } 00:27:24.412 ] 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "subsystem": "iobuf", 00:27:24.412 "config": [ 00:27:24.412 { 00:27:24.412 "method": "iobuf_set_options", 00:27:24.412 "params": { 00:27:24.412 "small_pool_count": 8192, 00:27:24.412 "large_pool_count": 1024, 00:27:24.412 "small_bufsize": 8192, 00:27:24.412 "large_bufsize": 135168 00:27:24.412 } 00:27:24.412 } 00:27:24.412 ] 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "subsystem": "sock", 00:27:24.412 "config": [ 00:27:24.412 { 00:27:24.412 "method": "sock_set_default_impl", 00:27:24.412 "params": { 00:27:24.412 "impl_name": "posix" 00:27:24.412 } 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "method": "sock_impl_set_options", 00:27:24.412 "params": { 00:27:24.412 "impl_name": "ssl", 00:27:24.412 "recv_buf_size": 4096, 00:27:24.412 "send_buf_size": 4096, 00:27:24.412 "enable_recv_pipe": true, 00:27:24.412 "enable_quickack": false, 00:27:24.412 "enable_placement_id": 0, 00:27:24.412 "enable_zerocopy_send_server": true, 00:27:24.412 "enable_zerocopy_send_client": false, 00:27:24.412 "zerocopy_threshold": 0, 00:27:24.412 "tls_version": 0, 00:27:24.412 "enable_ktls": false 00:27:24.412 } 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "method": "sock_impl_set_options", 00:27:24.412 "params": { 00:27:24.412 "impl_name": "posix", 00:27:24.412 "recv_buf_size": 2097152, 00:27:24.412 "send_buf_size": 2097152, 00:27:24.412 "enable_recv_pipe": true, 00:27:24.412 "enable_quickack": false, 00:27:24.412 "enable_placement_id": 0, 00:27:24.412 "enable_zerocopy_send_server": true, 00:27:24.412 "enable_zerocopy_send_client": false, 00:27:24.412 "zerocopy_threshold": 0, 00:27:24.412 "tls_version": 0, 00:27:24.412 "enable_ktls": false 00:27:24.412 } 00:27:24.412 } 00:27:24.412 ] 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "subsystem": "vmd", 00:27:24.412 "config": [] 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 "subsystem": "accel", 00:27:24.412 "config": [ 00:27:24.412 { 00:27:24.412 "method": "accel_set_options", 00:27:24.412 "params": { 00:27:24.412 "small_cache_size": 128, 00:27:24.412 "large_cache_size": 16, 00:27:24.412 "task_count": 2048, 00:27:24.412 "sequence_count": 2048, 00:27:24.412 "buf_count": 2048 00:27:24.412 } 00:27:24.412 } 00:27:24.412 ] 00:27:24.412 }, 00:27:24.412 { 00:27:24.412 
"subsystem": "bdev", 00:27:24.412 "config": [ 00:27:24.412 { 00:27:24.412 "method": "bdev_set_options", 00:27:24.412 "params": { 00:27:24.412 "bdev_io_pool_size": 65535, 00:27:24.413 "bdev_io_cache_size": 256, 00:27:24.413 "bdev_auto_examine": true, 00:27:24.413 "iobuf_small_cache_size": 128, 00:27:24.413 "iobuf_large_cache_size": 16 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_raid_set_options", 00:27:24.413 "params": { 00:27:24.413 "process_window_size_kb": 1024 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_iscsi_set_options", 00:27:24.413 "params": { 00:27:24.413 "timeout_sec": 30 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_nvme_set_options", 00:27:24.413 "params": { 00:27:24.413 "action_on_timeout": "none", 00:27:24.413 "timeout_us": 0, 00:27:24.413 "timeout_admin_us": 0, 00:27:24.413 "keep_alive_timeout_ms": 10000, 00:27:24.413 "arbitration_burst": 0, 00:27:24.413 "low_priority_weight": 0, 00:27:24.413 "medium_priority_weight": 0, 00:27:24.413 "high_priority_weight": 0, 00:27:24.413 "nvme_adminq_poll_period_us": 10000, 00:27:24.413 "nvme_ioq_poll_period_us": 0, 00:27:24.413 "io_queue_requests": 512, 00:27:24.413 "delay_cmd_submit": true, 00:27:24.413 "transport_retry_count": 4, 00:27:24.413 "bdev_retry_count": 3, 00:27:24.413 "transport_ack_timeout": 0, 00:27:24.413 "ctrlr_loss_timeout_sec": 0, 00:27:24.413 "reconnect_delay_sec": 0, 00:27:24.413 "fast_io_fail_timeout_sec": 0, 00:27:24.413 "disable_auto_failback": false, 00:27:24.413 "generate_uuids": false, 00:27:24.413 "transport_tos": 0, 00:27:24.413 "nvme_error_stat": false, 00:27:24.413 "rdma_srq_size": 0, 00:27:24.413 "io_path_stat": false, 00:27:24.413 "allow_accel_sequence": false, 00:27:24.413 "rdma_max_cq_size": 0, 00:27:24.413 "rdma_cm_event_timeout_ms": 0, 00:27:24.413 "dhchap_digests": [ 00:27:24.413 "sha256", 00:27:24.413 "sha384", 00:27:24.413 "sha512" 00:27:24.413 ], 00:27:24.413 "dhchap_dhgroups": [ 00:27:24.413 "null", 00:27:24.413 "ffdhe2048", 00:27:24.413 "ffdhe3072", 00:27:24.413 "ffdhe4096", 00:27:24.413 "ffdhe6144", 00:27:24.413 "ffdhe8192" 00:27:24.413 ] 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_nvme_attach_controller", 00:27:24.413 "params": { 00:27:24.413 "name": "nvme0", 00:27:24.413 "trtype": "TCP", 00:27:24.413 "adrfam": "IPv4", 00:27:24.413 "traddr": "127.0.0.1", 00:27:24.413 "trsvcid": "4420", 00:27:24.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.413 "prchk_reftag": false, 00:27:24.413 "prchk_guard": false, 00:27:24.413 "ctrlr_loss_timeout_sec": 0, 00:27:24.413 "reconnect_delay_sec": 0, 00:27:24.413 "fast_io_fail_timeout_sec": 0, 00:27:24.413 "psk": "key0", 00:27:24.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.413 "hdgst": false, 00:27:24.413 "ddgst": false 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_nvme_set_hotplug", 00:27:24.413 "params": { 00:27:24.413 "period_us": 100000, 00:27:24.413 "enable": false 00:27:24.413 } 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "method": "bdev_wait_for_examine" 00:27:24.413 } 00:27:24.413 ] 00:27:24.413 }, 00:27:24.413 { 00:27:24.413 "subsystem": "nbd", 00:27:24.413 "config": [] 00:27:24.413 } 00:27:24.413 ] 00:27:24.413 }' 00:27:24.413 16:20:10 keyring_file -- keyring/file.sh@114 -- # killprocess 915315 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 915315 ']' 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 915315 00:27:24.413 16:20:10 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 915315 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 915315' 00:27:24.413 killing process with pid 915315 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@967 -- # kill 915315 00:27:24.413 Received shutdown signal, test time was about 1.000000 seconds 00:27:24.413 00:27:24.413 Latency(us) 00:27:24.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.413 =================================================================================================================== 00:27:24.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.413 16:20:10 keyring_file -- common/autotest_common.sh@972 -- # wait 915315 00:27:24.685 16:20:10 keyring_file -- keyring/file.sh@117 -- # bperfpid=916763 00:27:24.685 16:20:10 keyring_file -- keyring/file.sh@119 -- # waitforlisten 916763 /var/tmp/bperf.sock 00:27:24.685 16:20:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 916763 ']' 00:27:24.685 16:20:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.685 16:20:10 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:24.685 16:20:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.685 16:20:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
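After the first bdevperf instance is killed, the test relaunches it with the whole configuration, both keyring_file keys plus the TLS-enabled controller, handed over as JSON via the -c flag, so the keys exist before any further RPCs run. A pruned, illustrative version of that launch, keeping only the keyring and bdev_nvme portions of the config echoed below, might look like this; the relative bdevperf path is an assumption, while the flags, key paths and controller parameters are taken from this run.

#!/usr/bin/env bash
# Sketch only: start bdevperf with a preloaded keyring + controller config.
# Process substitution supplies the JSON on a /dev/fd path (shown as /dev/fd/63
# in this trace, keyring/file.sh@115).
config='{
  "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.fDGzsDT57L" } },
        { "method": "keyring_file_add_key",
          "params": { "name": "key1", "path": "/tmp/tmp.X42glizuDM" } } ] },
    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "127.0.0.1", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "psk": "key0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_wait_for_examine" } ] }
  ]
}'
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")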
00:27:24.685 16:20:10 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:24.685 "subsystems": [ 00:27:24.685 { 00:27:24.685 "subsystem": "keyring", 00:27:24.685 "config": [ 00:27:24.685 { 00:27:24.685 "method": "keyring_file_add_key", 00:27:24.685 "params": { 00:27:24.685 "name": "key0", 00:27:24.685 "path": "/tmp/tmp.fDGzsDT57L" 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": "keyring_file_add_key", 00:27:24.685 "params": { 00:27:24.685 "name": "key1", 00:27:24.685 "path": "/tmp/tmp.X42glizuDM" 00:27:24.685 } 00:27:24.685 } 00:27:24.685 ] 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "subsystem": "iobuf", 00:27:24.685 "config": [ 00:27:24.685 { 00:27:24.685 "method": "iobuf_set_options", 00:27:24.685 "params": { 00:27:24.685 "small_pool_count": 8192, 00:27:24.685 "large_pool_count": 1024, 00:27:24.685 "small_bufsize": 8192, 00:27:24.685 "large_bufsize": 135168 00:27:24.685 } 00:27:24.685 } 00:27:24.685 ] 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "subsystem": "sock", 00:27:24.685 "config": [ 00:27:24.685 { 00:27:24.685 "method": "sock_set_default_impl", 00:27:24.685 "params": { 00:27:24.685 "impl_name": "posix" 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": "sock_impl_set_options", 00:27:24.685 "params": { 00:27:24.685 "impl_name": "ssl", 00:27:24.685 "recv_buf_size": 4096, 00:27:24.685 "send_buf_size": 4096, 00:27:24.685 "enable_recv_pipe": true, 00:27:24.685 "enable_quickack": false, 00:27:24.685 "enable_placement_id": 0, 00:27:24.685 "enable_zerocopy_send_server": true, 00:27:24.685 "enable_zerocopy_send_client": false, 00:27:24.685 "zerocopy_threshold": 0, 00:27:24.685 "tls_version": 0, 00:27:24.685 "enable_ktls": false 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": "sock_impl_set_options", 00:27:24.685 "params": { 00:27:24.685 "impl_name": "posix", 00:27:24.685 "recv_buf_size": 2097152, 00:27:24.685 "send_buf_size": 2097152, 00:27:24.685 "enable_recv_pipe": true, 00:27:24.685 "enable_quickack": false, 00:27:24.685 "enable_placement_id": 0, 00:27:24.685 "enable_zerocopy_send_server": true, 00:27:24.685 "enable_zerocopy_send_client": false, 00:27:24.685 "zerocopy_threshold": 0, 00:27:24.685 "tls_version": 0, 00:27:24.685 "enable_ktls": false 00:27:24.685 } 00:27:24.685 } 00:27:24.685 ] 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "subsystem": "vmd", 00:27:24.685 "config": [] 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "subsystem": "accel", 00:27:24.685 "config": [ 00:27:24.685 { 00:27:24.685 "method": "accel_set_options", 00:27:24.685 "params": { 00:27:24.685 "small_cache_size": 128, 00:27:24.685 "large_cache_size": 16, 00:27:24.685 "task_count": 2048, 00:27:24.685 "sequence_count": 2048, 00:27:24.685 "buf_count": 2048 00:27:24.685 } 00:27:24.685 } 00:27:24.685 ] 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "subsystem": "bdev", 00:27:24.685 "config": [ 00:27:24.685 { 00:27:24.685 "method": "bdev_set_options", 00:27:24.685 "params": { 00:27:24.685 "bdev_io_pool_size": 65535, 00:27:24.685 "bdev_io_cache_size": 256, 00:27:24.685 "bdev_auto_examine": true, 00:27:24.685 "iobuf_small_cache_size": 128, 00:27:24.685 "iobuf_large_cache_size": 16 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": "bdev_raid_set_options", 00:27:24.685 "params": { 00:27:24.685 "process_window_size_kb": 1024 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": "bdev_iscsi_set_options", 00:27:24.685 "params": { 00:27:24.685 "timeout_sec": 30 00:27:24.685 } 00:27:24.685 }, 00:27:24.685 { 00:27:24.685 "method": 
"bdev_nvme_set_options", 00:27:24.685 "params": { 00:27:24.685 "action_on_timeout": "none", 00:27:24.685 "timeout_us": 0, 00:27:24.685 "timeout_admin_us": 0, 00:27:24.685 "keep_alive_timeout_ms": 10000, 00:27:24.685 "arbitration_burst": 0, 00:27:24.685 "low_priority_weight": 0, 00:27:24.685 "medium_priority_weight": 0, 00:27:24.685 "high_priority_weight": 0, 00:27:24.685 "nvme_adminq_poll_period_us": 10000, 00:27:24.686 "nvme_ioq_poll_period_us": 0, 00:27:24.686 "io_queue_requests": 512, 00:27:24.686 "delay_cmd_submit": true, 00:27:24.686 "transport_retry_count": 4, 00:27:24.686 "bdev_retry_count": 3, 00:27:24.686 "transport_ack_timeout": 0, 00:27:24.686 "ctrlr_loss_timeout_sec": 0, 00:27:24.686 "reconnect_delay_sec": 0, 00:27:24.686 "fast_io_fail_timeout_sec": 0, 00:27:24.686 "disable_auto_failback": false, 00:27:24.686 "generate_uuids": false, 00:27:24.686 "transport_tos": 0, 00:27:24.686 "nvme_error_stat": false, 00:27:24.686 "rdma_srq_size": 0, 00:27:24.686 "io_path_stat": false, 00:27:24.686 "allow_accel_sequence": false, 00:27:24.686 "rdma_max_cq_size": 0, 00:27:24.686 "rdma_cm_event_timeout_ms": 0, 00:27:24.686 "dhchap_digests": [ 00:27:24.686 "sha256", 00:27:24.686 "sha384", 00:27:24.686 "sha512" 00:27:24.686 ], 00:27:24.686 "dhchap_dhgroups": [ 00:27:24.686 "null", 00:27:24.686 "ffdhe2048", 00:27:24.686 "ffdhe3072", 00:27:24.686 "ffdhe4096", 00:27:24.686 "ffdhe6144", 00:27:24.686 "ffdhe8192" 00:27:24.686 ] 00:27:24.686 } 00:27:24.686 }, 00:27:24.686 { 00:27:24.686 "method": "bdev_nvme_attach_controller", 00:27:24.686 "params": { 00:27:24.686 "name": "nvme0", 00:27:24.686 "trtype": "TCP", 00:27:24.686 "adrfam": "IPv4", 00:27:24.686 "traddr": "127.0.0.1", 00:27:24.686 "trsvcid": "4420", 00:27:24.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.686 "prchk_reftag": false, 00:27:24.686 "prchk_guard": false, 00:27:24.686 "ctrlr_loss_timeout_sec": 0, 00:27:24.686 "reconnect_delay_sec": 0, 00:27:24.686 "fast_io_fail_timeout_sec": 0, 00:27:24.686 "psk": "key0", 00:27:24.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.686 "hdgst": false, 00:27:24.686 "ddgst": false 00:27:24.686 } 00:27:24.686 }, 00:27:24.686 { 00:27:24.686 "method": "bdev_nvme_set_hotplug", 00:27:24.686 "params": { 00:27:24.686 "period_us": 100000, 00:27:24.686 "enable": false 00:27:24.686 } 00:27:24.686 }, 00:27:24.686 { 00:27:24.686 "method": "bdev_wait_for_examine" 00:27:24.686 } 00:27:24.686 ] 00:27:24.686 }, 00:27:24.686 { 00:27:24.686 "subsystem": "nbd", 00:27:24.686 "config": [] 00:27:24.686 } 00:27:24.686 ] 00:27:24.686 }' 00:27:24.686 16:20:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.686 16:20:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:24.686 [2024-07-15 16:20:10.620797] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:27:24.686 [2024-07-15 16:20:10.620869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916763 ] 00:27:24.686 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.686 [2024-07-15 16:20:10.677771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.942 [2024-07-15 16:20:10.782790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.198 [2024-07-15 16:20:10.965515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:25.794 16:20:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.794 16:20:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:25.794 16:20:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:25.794 16:20:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:25.794 16:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.058 16:20:11 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:26.058 16:20:11 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:26.058 16:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:26.058 16:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:26.058 16:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.058 16:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.058 16:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:26.316 16:20:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:26.316 16:20:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:26.316 16:20:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:26.316 16:20:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:26.316 16:20:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:26.316 16:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:26.573 16:20:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:26.573 16:20:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:26.573 16:20:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fDGzsDT57L /tmp/tmp.X42glizuDM 00:27:26.573 16:20:12 keyring_file -- keyring/file.sh@20 -- # killprocess 916763 00:27:26.573 16:20:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 916763 ']' 00:27:26.573 16:20:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 916763 00:27:26.573 16:20:12 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916763 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916763' 00:27:26.830 killing process with pid 916763 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@967 -- # kill 916763 00:27:26.830 Received shutdown signal, test time was about 1.000000 seconds 00:27:26.830 00:27:26.830 Latency(us) 00:27:26.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.830 =================================================================================================================== 00:27:26.830 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:26.830 16:20:12 keyring_file -- common/autotest_common.sh@972 -- # wait 916763 00:27:27.087 16:20:12 keyring_file -- keyring/file.sh@21 -- # killprocess 915294 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 915294 ']' 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 915294 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 915294 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 915294' 00:27:27.087 killing process with pid 915294 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@967 -- # kill 915294 00:27:27.087 [2024-07-15 16:20:12.893213] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:27.087 16:20:12 keyring_file -- common/autotest_common.sh@972 -- # wait 915294 00:27:27.345 00:27:27.345 real 0m14.139s 00:27:27.345 user 0m35.280s 00:27:27.345 sys 0m3.266s 00:27:27.345 16:20:13 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.345 16:20:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.345 ************************************ 00:27:27.345 END TEST keyring_file 00:27:27.345 ************************************ 00:27:27.345 16:20:13 -- common/autotest_common.sh@1142 -- # return 0 00:27:27.345 16:20:13 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:27.345 16:20:13 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:27.345 16:20:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:27.345 16:20:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.602 16:20:13 -- common/autotest_common.sh@10 -- # set +x 00:27:27.602 ************************************ 00:27:27.602 START TEST keyring_linux 00:27:27.602 ************************************ 00:27:27.602 16:20:13 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:27.602 * Looking for test storage... 00:27:27.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:27.602 16:20:13 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.603 16:20:13 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.603 16:20:13 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.603 16:20:13 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.603 16:20:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.603 16:20:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.603 16:20:13 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.603 16:20:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:27.603 16:20:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:27.603 16:20:13 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:27.603 /tmp/:spdk-test:key0 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:27.603 16:20:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:27.603 16:20:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:27.603 /tmp/:spdk-test:key1 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=917132 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:27.603 16:20:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 917132 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 917132 ']' 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.603 16:20:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:27.603 [2024-07-15 16:20:13.569477] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:27:27.603 [2024-07-15 16:20:13.569556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917132 ] 00:27:27.603 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.859 [2024-07-15 16:20:13.632510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.859 [2024-07-15 16:20:13.744138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.117 16:20:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.117 16:20:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:28.117 16:20:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:28.117 16:20:13 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.117 16:20:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:28.117 [2024-07-15 16:20:14.003357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.117 null0 00:27:28.117 [2024-07-15 16:20:14.035394] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:28.117 [2024-07-15 16:20:14.035880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.117 16:20:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:28.117 419988801 00:27:28.117 16:20:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:28.117 79044673 00:27:28.117 16:20:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=917261 00:27:28.117 16:20:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 917261 /var/tmp/bperf.sock 00:27:28.117 16:20:14 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 917261 ']' 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.117 16:20:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:28.117 [2024-07-15 16:20:14.106106] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
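From here the run switches to the keyring_linux backend: the same interchange-format PSKs are stored in the kernel session keyring with keyctl rather than in files, the backend is enabled over RPC, and the controller is attached by naming the kernel key. Reproduced by hand, the sequence this trace goes on to exercise looks roughly like the sketch below; the rpc.py location is an assumption, while the key names, key material, serial-number handling and cleanup mirror this run.

#!/usr/bin/env bash
# Sketch of the kernel-keyring flow exercised by keyring/linux.sh.
rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # assumed rpc.py path

# Put the interchange-format PSK into the session keyring; keyctl prints the
# key's serial number (419988801 in this run).
keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# The linux keyring backend is opt-in, so enable it before framework init
# (bdevperf was started with --wait-for-rpc for exactly this reason).
rpc keyring_linux_set_options --enable
rpc framework_start_init

# Attach over TCP/TLS, naming the kernel key instead of a file-backed key.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Verify and clean up the way the test does: find the serial, then unlink it.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
keyctl unlink "$sn"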
00:27:28.117 [2024-07-15 16:20:14.106188] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917261 ] 00:27:28.374 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.374 [2024-07-15 16:20:14.164994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.374 [2024-07-15 16:20:14.273607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.374 16:20:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.374 16:20:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:28.374 16:20:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:28.374 16:20:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:28.630 16:20:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:28.630 16:20:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:29.195 16:20:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:29.195 16:20:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:29.195 [2024-07-15 16:20:15.112231] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.195 nvme0n1 00:27:29.195 16:20:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:29.195 16:20:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:29.195 16:20:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:29.452 16:20:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:29.452 16:20:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:29.452 16:20:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.452 16:20:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.452 16:20:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@25 -- # sn=419988801 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 419988801 == \4\1\9\9\8\8\8\0\1 ]] 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 419988801 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:29.709 16:20:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.966 Running I/O for 1 seconds... 00:27:30.897 00:27:30.897 Latency(us) 00:27:30.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.897 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:30.897 nvme0n1 : 1.01 10414.44 40.68 0.00 0.00 12209.11 4878.79 17379.18 00:27:30.897 =================================================================================================================== 00:27:30.897 Total : 10414.44 40.68 0.00 0.00 12209.11 4878.79 17379.18 00:27:30.897 0 00:27:30.897 16:20:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:30.897 16:20:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:31.154 16:20:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:31.154 16:20:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:31.154 16:20:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:31.154 16:20:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:31.154 16:20:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.154 16:20:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:31.412 16:20:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:31.412 16:20:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:31.412 16:20:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:31.412 16:20:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.412 16:20:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:31.412 16:20:17 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:31.669 [2024-07-15 16:20:17.579364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:31.669 [2024-07-15 16:20:17.580056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ab3f0 (107): Transport endpoint is not connected 00:27:31.669 [2024-07-15 16:20:17.581049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ab3f0 (9): Bad file descriptor 00:27:31.669 [2024-07-15 16:20:17.582048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.669 [2024-07-15 16:20:17.582066] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:31.669 [2024-07-15 16:20:17.582078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.669 request: 00:27:31.669 { 00:27:31.669 "name": "nvme0", 00:27:31.669 "trtype": "tcp", 00:27:31.669 "traddr": "127.0.0.1", 00:27:31.669 "adrfam": "ipv4", 00:27:31.669 "trsvcid": "4420", 00:27:31.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.669 "prchk_reftag": false, 00:27:31.669 "prchk_guard": false, 00:27:31.669 "hdgst": false, 00:27:31.669 "ddgst": false, 00:27:31.669 "psk": ":spdk-test:key1", 00:27:31.669 "method": "bdev_nvme_attach_controller", 00:27:31.669 "req_id": 1 00:27:31.669 } 00:27:31.669 Got JSON-RPC error response 00:27:31.669 response: 00:27:31.669 { 00:27:31.669 "code": -5, 00:27:31.669 "message": "Input/output error" 00:27:31.669 } 00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@33 -- # sn=419988801 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 419988801 00:27:31.669 1 links removed 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@33 -- # sn=79044673 00:27:31.669 16:20:17 
00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 79044673
00:27:31.669 1 links removed
00:27:31.669 16:20:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 917261
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 917261 ']'
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 917261
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 917261
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 917261'
00:27:31.669 killing process with pid 917261
00:27:31.669 16:20:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 917261
00:27:31.669 Received shutdown signal, test time was about 1.000000 seconds
00:27:31.669
00:27:31.669 Latency(us)
00:27:31.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:31.670 ===================================================================================================================
00:27:31.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:31.670 16:20:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 917261
00:27:31.926 16:20:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 917132
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 917132 ']'
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 917132
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 917132
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 917132'
00:27:31.926 killing process with pid 917132
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 917132
00:27:31.926 16:20:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 917132
00:27:32.491 16:20:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 917132
00:27:32.491
00:27:32.491 real 0m4.900s
00:27:32.491 user 0m9.481s
00:27:32.491 sys 0m1.611s
00:27:32.491 16:20:18 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:32.491 16:20:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:27:32.491 ************************************
00:27:32.491 END TEST keyring_linux
00:27:32.491 ************************************
00:27:32.491 16:20:18 -- common/autotest_common.sh@1142 -- # return 0
00:27:32.491 16:20:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:27:32.491 16:20:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
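[Editor's sketch] The failed attach and the teardown above show how keyring_linux drives the kernel session keyring: the PSK is stored as a user key named :spdk-test:keyN, only that name is passed to bdev_nvme_attach_controller via --psk, and cleanup resolves the key's serial number with keyctl search before unlinking it. A minimal bash sketch of that lifecycle follows, reusing the socket path, NQNs and key name from this run; the payload value and the setup step are assumptions (they happen earlier in the test, outside this excerpt):

  PSK_PAYLOAD='example-psk-material'                                  # hypothetical payload; the real test derives it elsewhere
  keyctl add user :spdk-test:key1 "$PSK_PAYLOAD" @s                   # place the key in the session keyring
  sn=$(keyctl search @s user :spdk-test:key1)                         # resolve its serial number, as get_keysn does above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key1                                           # the call that returned the -5 error above
  keyctl unlink "$sn"                                                 # teardown, matching the '1 links removed' entries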
00:27:32.492 16:20:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:27:32.492 16:20:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:27:32.492 16:20:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:27:32.492 16:20:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:27:32.492 16:20:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:27:32.492 16:20:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:27:32.492 16:20:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:27:32.492 16:20:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:27:32.492 16:20:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:27:32.492 16:20:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:27:32.492 16:20:18 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:32.492 16:20:18 -- common/autotest_common.sh@10 -- # set +x
00:27:32.492 16:20:18 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:27:32.492 16:20:18 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:27:32.492 16:20:18 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:27:32.492 16:20:18 -- common/autotest_common.sh@10 -- # set +x
00:27:34.392 INFO: APP EXITING
00:27:34.392 INFO: killing all VMs
00:27:34.392 INFO: killing vhost app
00:27:34.392 INFO: EXIT DONE
00:27:35.328 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:27:35.328 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:27:35.328 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:27:35.328 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:27:35.586 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:27:35.586 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:27:35.586 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:27:35.586 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:27:35.586 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:27:35.586 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:27:35.586 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:27:35.586 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:27:35.586 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:27:35.586 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:27:35.586 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:27:35.586 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:27:35.586 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:27:36.961 Cleaning
00:27:36.961 Removing: /var/run/dpdk/spdk0/config
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:27:36.961 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:36.961 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:36.961 Removing: /var/run/dpdk/spdk1/config
00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:36.961 Removing:
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:36.961 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:36.961 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:36.961 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:36.961 Removing: /var/run/dpdk/spdk2/config 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:36.961 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:36.961 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:36.961 Removing: /var/run/dpdk/spdk3/config 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:36.961 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:36.961 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:36.961 Removing: /var/run/dpdk/spdk4/config 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:36.961 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:36.961 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:36.961 Removing: /dev/shm/bdev_svc_trace.1 00:27:36.961 Removing: /dev/shm/nvmf_trace.0 00:27:36.961 Removing: /dev/shm/spdk_tgt_trace.pid659348 00:27:36.961 Removing: /var/run/dpdk/spdk0 00:27:36.961 Removing: /var/run/dpdk/spdk1 00:27:36.961 Removing: /var/run/dpdk/spdk2 00:27:36.961 Removing: /var/run/dpdk/spdk3 00:27:36.961 Removing: /var/run/dpdk/spdk4 00:27:36.961 Removing: /var/run/dpdk/spdk_pid657800 00:27:36.961 Removing: /var/run/dpdk/spdk_pid658535 00:27:36.961 Removing: /var/run/dpdk/spdk_pid659348 00:27:36.961 Removing: /var/run/dpdk/spdk_pid659783 00:27:36.961 Removing: /var/run/dpdk/spdk_pid660470 00:27:36.961 Removing: /var/run/dpdk/spdk_pid660612 00:27:36.961 Removing: /var/run/dpdk/spdk_pid661332 00:27:36.961 Removing: /var/run/dpdk/spdk_pid661337 00:27:36.961 Removing: /var/run/dpdk/spdk_pid661579 00:27:36.961 Removing: /var/run/dpdk/spdk_pid662892 00:27:36.961 Removing: /var/run/dpdk/spdk_pid663927 00:27:36.961 Removing: /var/run/dpdk/spdk_pid664693 00:27:36.961 
Removing: /var/run/dpdk/spdk_pid664930 00:27:36.961 Removing: /var/run/dpdk/spdk_pid665132 00:27:36.961 Removing: /var/run/dpdk/spdk_pid665321 00:27:36.961 Removing: /var/run/dpdk/spdk_pid665484 00:27:36.961 Removing: /var/run/dpdk/spdk_pid665760 00:27:36.961 Removing: /var/run/dpdk/spdk_pid665941 00:27:36.961 Removing: /var/run/dpdk/spdk_pid666140 00:27:36.961 Removing: /var/run/dpdk/spdk_pid668582 00:27:36.961 Removing: /var/run/dpdk/spdk_pid668765 00:27:36.961 Removing: /var/run/dpdk/spdk_pid668926 00:27:36.961 Removing: /var/run/dpdk/spdk_pid668940 00:27:36.961 Removing: /var/run/dpdk/spdk_pid669368 00:27:36.961 Removing: /var/run/dpdk/spdk_pid669375 00:27:36.961 Removing: /var/run/dpdk/spdk_pid669797 00:27:36.961 Removing: /var/run/dpdk/spdk_pid669807 00:27:36.961 Removing: /var/run/dpdk/spdk_pid669980 00:27:36.961 Removing: /var/run/dpdk/spdk_pid670105 00:27:36.961 Removing: /var/run/dpdk/spdk_pid670269 00:27:36.961 Removing: /var/run/dpdk/spdk_pid670285 00:27:36.961 Removing: /var/run/dpdk/spdk_pid670770 00:27:37.219 Removing: /var/run/dpdk/spdk_pid670926 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671119 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671287 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671324 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671500 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671659 00:27:37.219 Removing: /var/run/dpdk/spdk_pid671930 00:27:37.219 Removing: /var/run/dpdk/spdk_pid672093 00:27:37.219 Removing: /var/run/dpdk/spdk_pid672245 00:27:37.219 Removing: /var/run/dpdk/spdk_pid672467 00:27:37.219 Removing: /var/run/dpdk/spdk_pid672675 00:27:37.219 Removing: /var/run/dpdk/spdk_pid672839 00:27:37.219 Removing: /var/run/dpdk/spdk_pid673009 00:27:37.219 Removing: /var/run/dpdk/spdk_pid673263 00:27:37.219 Removing: /var/run/dpdk/spdk_pid673425 00:27:37.219 Removing: /var/run/dpdk/spdk_pid673587 00:27:37.219 Removing: /var/run/dpdk/spdk_pid673855 00:27:37.219 Removing: /var/run/dpdk/spdk_pid674014 00:27:37.219 Removing: /var/run/dpdk/spdk_pid674172 00:27:37.219 Removing: /var/run/dpdk/spdk_pid674444 00:27:37.219 Removing: /var/run/dpdk/spdk_pid674602 00:27:37.219 Removing: /var/run/dpdk/spdk_pid674764 00:27:37.219 Removing: /var/run/dpdk/spdk_pid675041 00:27:37.219 Removing: /var/run/dpdk/spdk_pid675197 00:27:37.219 Removing: /var/run/dpdk/spdk_pid675355 00:27:37.219 Removing: /var/run/dpdk/spdk_pid675541 00:27:37.219 Removing: /var/run/dpdk/spdk_pid675747 00:27:37.219 Removing: /var/run/dpdk/spdk_pid677800 00:27:37.219 Removing: /var/run/dpdk/spdk_pid704070 00:27:37.219 Removing: /var/run/dpdk/spdk_pid706624 00:27:37.219 Removing: /var/run/dpdk/spdk_pid713522 00:27:37.219 Removing: /var/run/dpdk/spdk_pid716702 00:27:37.219 Removing: /var/run/dpdk/spdk_pid719044 00:27:37.219 Removing: /var/run/dpdk/spdk_pid719453 00:27:37.219 Removing: /var/run/dpdk/spdk_pid723426 00:27:37.219 Removing: /var/run/dpdk/spdk_pid727144 00:27:37.219 Removing: /var/run/dpdk/spdk_pid727198 00:27:37.219 Removing: /var/run/dpdk/spdk_pid727808 00:27:37.219 Removing: /var/run/dpdk/spdk_pid728461 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729006 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729404 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729528 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729664 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729801 00:27:37.219 Removing: /var/run/dpdk/spdk_pid729803 00:27:37.219 Removing: /var/run/dpdk/spdk_pid730466 00:27:37.219 Removing: /var/run/dpdk/spdk_pid731032 00:27:37.219 Removing: /var/run/dpdk/spdk_pid731778 00:27:37.219 Removing: 
/var/run/dpdk/spdk_pid732601 00:27:37.219 Removing: /var/run/dpdk/spdk_pid732686 00:27:37.219 Removing: /var/run/dpdk/spdk_pid732952 00:27:37.219 Removing: /var/run/dpdk/spdk_pid733834 00:27:37.219 Removing: /var/run/dpdk/spdk_pid734598 00:27:37.219 Removing: /var/run/dpdk/spdk_pid740032 00:27:37.219 Removing: /var/run/dpdk/spdk_pid740264 00:27:37.219 Removing: /var/run/dpdk/spdk_pid742817 00:27:37.219 Removing: /var/run/dpdk/spdk_pid746516 00:27:37.219 Removing: /var/run/dpdk/spdk_pid748692 00:27:37.219 Removing: /var/run/dpdk/spdk_pid754971 00:27:37.219 Removing: /var/run/dpdk/spdk_pid760169 00:27:37.219 Removing: /var/run/dpdk/spdk_pid761479 00:27:37.219 Removing: /var/run/dpdk/spdk_pid762147 00:27:37.219 Removing: /var/run/dpdk/spdk_pid772970 00:27:37.219 Removing: /var/run/dpdk/spdk_pid775057 00:27:37.219 Removing: /var/run/dpdk/spdk_pid799684 00:27:37.219 Removing: /var/run/dpdk/spdk_pid802553 00:27:37.219 Removing: /var/run/dpdk/spdk_pid803667 00:27:37.219 Removing: /var/run/dpdk/spdk_pid804981 00:27:37.219 Removing: /var/run/dpdk/spdk_pid805117 00:27:37.219 Removing: /var/run/dpdk/spdk_pid805258 00:27:37.219 Removing: /var/run/dpdk/spdk_pid805314 00:27:37.219 Removing: /var/run/dpdk/spdk_pid805710 00:27:37.219 Removing: /var/run/dpdk/spdk_pid807023 00:27:37.219 Removing: /var/run/dpdk/spdk_pid807739 00:27:37.219 Removing: /var/run/dpdk/spdk_pid808054 00:27:37.219 Removing: /var/run/dpdk/spdk_pid809671 00:27:37.219 Removing: /var/run/dpdk/spdk_pid810094 00:27:37.219 Removing: /var/run/dpdk/spdk_pid810539 00:27:37.219 Removing: /var/run/dpdk/spdk_pid813057 00:27:37.219 Removing: /var/run/dpdk/spdk_pid819081 00:27:37.219 Removing: /var/run/dpdk/spdk_pid821772 00:27:37.219 Removing: /var/run/dpdk/spdk_pid826240 00:27:37.219 Removing: /var/run/dpdk/spdk_pid827186 00:27:37.219 Removing: /var/run/dpdk/spdk_pid828272 00:27:37.219 Removing: /var/run/dpdk/spdk_pid830823 00:27:37.219 Removing: /var/run/dpdk/spdk_pid833175 00:27:37.219 Removing: /var/run/dpdk/spdk_pid837510 00:27:37.219 Removing: /var/run/dpdk/spdk_pid837517 00:27:37.219 Removing: /var/run/dpdk/spdk_pid840293 00:27:37.219 Removing: /var/run/dpdk/spdk_pid840429 00:27:37.219 Removing: /var/run/dpdk/spdk_pid840562 00:27:37.219 Removing: /var/run/dpdk/spdk_pid840861 00:27:37.219 Removing: /var/run/dpdk/spdk_pid840951 00:27:37.219 Removing: /var/run/dpdk/spdk_pid843578 00:27:37.219 Removing: /var/run/dpdk/spdk_pid843920 00:27:37.219 Removing: /var/run/dpdk/spdk_pid846660 00:27:37.219 Removing: /var/run/dpdk/spdk_pid848561 00:27:37.219 Removing: /var/run/dpdk/spdk_pid851979 00:27:37.220 Removing: /var/run/dpdk/spdk_pid855286 00:27:37.220 Removing: /var/run/dpdk/spdk_pid862153 00:27:37.220 Removing: /var/run/dpdk/spdk_pid866624 00:27:37.476 Removing: /var/run/dpdk/spdk_pid866627 00:27:37.476 Removing: /var/run/dpdk/spdk_pid878557 00:27:37.476 Removing: /var/run/dpdk/spdk_pid878961 00:27:37.476 Removing: /var/run/dpdk/spdk_pid879488 00:27:37.476 Removing: /var/run/dpdk/spdk_pid879903 00:27:37.476 Removing: /var/run/dpdk/spdk_pid880482 00:27:37.476 Removing: /var/run/dpdk/spdk_pid880891 00:27:37.476 Removing: /var/run/dpdk/spdk_pid881303 00:27:37.476 Removing: /var/run/dpdk/spdk_pid881824 00:27:37.476 Removing: /var/run/dpdk/spdk_pid884337 00:27:37.476 Removing: /var/run/dpdk/spdk_pid884497 00:27:37.476 Removing: /var/run/dpdk/spdk_pid888331 00:27:37.476 Removing: /var/run/dpdk/spdk_pid888571 00:27:37.476 Removing: /var/run/dpdk/spdk_pid890173 00:27:37.476 Removing: /var/run/dpdk/spdk_pid895713 00:27:37.476 Removing: 
/var/run/dpdk/spdk_pid895718 00:27:37.476 Removing: /var/run/dpdk/spdk_pid898611 00:27:37.476 Removing: /var/run/dpdk/spdk_pid900010 00:27:37.476 Removing: /var/run/dpdk/spdk_pid901431 00:27:37.476 Removing: /var/run/dpdk/spdk_pid902264 00:27:37.476 Removing: /var/run/dpdk/spdk_pid903672 00:27:37.476 Removing: /var/run/dpdk/spdk_pid904542 00:27:37.476 Removing: /var/run/dpdk/spdk_pid909872 00:27:37.476 Removing: /var/run/dpdk/spdk_pid910208 00:27:37.476 Removing: /var/run/dpdk/spdk_pid910600 00:27:37.476 Removing: /var/run/dpdk/spdk_pid912163 00:27:37.476 Removing: /var/run/dpdk/spdk_pid912555 00:27:37.476 Removing: /var/run/dpdk/spdk_pid912837 00:27:37.476 Removing: /var/run/dpdk/spdk_pid915294 00:27:37.476 Removing: /var/run/dpdk/spdk_pid915315 00:27:37.477 Removing: /var/run/dpdk/spdk_pid916763 00:27:37.477 Removing: /var/run/dpdk/spdk_pid917132 00:27:37.477 Removing: /var/run/dpdk/spdk_pid917261 00:27:37.477 Clean 00:27:37.477 16:20:23 -- common/autotest_common.sh@1451 -- # return 0 00:27:37.477 16:20:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:37.477 16:20:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:37.477 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:27:37.477 16:20:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:37.477 16:20:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:37.477 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:27:37.477 16:20:23 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:37.477 16:20:23 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:37.477 16:20:23 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:37.477 16:20:23 -- spdk/autotest.sh@391 -- # hash lcov 00:27:37.477 16:20:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:37.477 16:20:23 -- spdk/autotest.sh@393 -- # hostname 00:27:37.477 16:20:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:37.733 geninfo: WARNING: invalid characters removed from testname! 
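[Editor's sketch] The lcov capture above writes per-run coverage to cov_test.info under the autotest node name, and the entries that follow merge it with the pre-test baseline and strip third-party and helper paths before the intermediates are removed. A condensed bash sketch of that post-processing, with the workspace prefix abbreviated to $SPDK/$OUT and the repeated --rc options shortened to $RC (those variable names are not in the log, and the filter patterns are applied one per autotest.sh step rather than in a loop):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'      # the run also sets the genhtml/geninfo rc options
  lcov $RC --no-external -q -c -d $SPDK -t spdk-gp-06 -o $OUT/cov_test.info          # capture (the step logged above)
  lcov $RC --no-external -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info   # merge with baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC --no-external -q -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info  # drop external/helper code
  done
  rm -f $OUT/cov_base.info $OUT/cov_test.info                                         # cleanup, as at autotest.sh@400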
00:28:09.800 16:20:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:09.800 16:20:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:13.075 16:20:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:15.597 16:21:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:18.873 16:21:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:21.397 16:21:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:24.700 16:21:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:24.700 16:21:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.700 16:21:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:24.700 16:21:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.700 16:21:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.700 16:21:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.700 16:21:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.700 16:21:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.700 16:21:10 -- paths/export.sh@5 -- $ export PATH 00:28:24.700 16:21:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.700 16:21:10 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:24.700 16:21:10 -- common/autobuild_common.sh@473 -- $ date +%s 00:28:24.700 16:21:10 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721053270.XXXXXX 00:28:24.700 16:21:10 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053270.vjnv4P 00:28:24.700 16:21:10 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:28:24.700 16:21:10 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:28:24.700 16:21:10 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:24.700 16:21:10 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:24.700 16:21:10 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:24.700 16:21:10 -- common/autobuild_common.sh@489 -- $ get_config_params 00:28:24.700 16:21:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:24.700 16:21:10 -- common/autotest_common.sh@10 -- $ set +x 00:28:24.700 16:21:10 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:24.700 16:21:10 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:28:24.700 16:21:10 -- pm/common@17 -- $ local monitor 00:28:24.700 16:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.700 16:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.700 16:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.700 16:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:24.700 16:21:10 -- pm/common@21 -- $ date +%s 00:28:24.700 16:21:10 -- pm/common@25 -- $ sleep 1 00:28:24.700 
16:21:10 -- pm/common@21 -- $ date +%s 00:28:24.700 16:21:10 -- pm/common@21 -- $ date +%s 00:28:24.700 16:21:10 -- pm/common@21 -- $ date +%s 00:28:24.700 16:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053270 00:28:24.700 16:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053270 00:28:24.700 16:21:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053270 00:28:24.700 16:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053270 00:28:24.700 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053270_collect-vmstat.pm.log 00:28:24.700 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053270_collect-cpu-load.pm.log 00:28:24.700 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053270_collect-cpu-temp.pm.log 00:28:24.700 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053270_collect-bmc-pm.bmc.pm.log 00:28:25.639 16:21:11 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:28:25.639 16:21:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:28:25.639 16:21:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:25.639 16:21:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:25.639 16:21:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:25.639 16:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:25.639 16:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:25.639 16:21:11 -- pm/common@44 -- $ pid=927466 00:28:25.639 16:21:11 -- pm/common@50 -- $ kill -TERM 927466 00:28:25.639 16:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:25.639 16:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:25.639 16:21:11 -- pm/common@44 -- $ pid=927468 00:28:25.639 16:21:11 -- pm/common@50 -- $ kill -TERM 927468 00:28:25.639 16:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:25.639 16:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:25.639 16:21:11 -- pm/common@44 -- $ pid=927470 00:28:25.639 16:21:11 -- pm/common@50 -- $ kill -TERM 927470 00:28:25.639 16:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:25.639 16:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:25.639 16:21:11 -- pm/common@44 -- $ pid=927497 00:28:25.639 16:21:11 -- pm/common@50 -- $ sudo -E kill -TERM 927497 00:28:25.639 + [[ -n 573761 ]] 00:28:25.639 + sudo kill 573761 00:28:25.652 [Pipeline] } 00:28:25.672 [Pipeline] // 
stage
00:28:25.677 [Pipeline] }
00:28:25.695 [Pipeline] // timeout
00:28:25.700 [Pipeline] }
00:28:25.720 [Pipeline] // catchError
00:28:25.725 [Pipeline] }
00:28:25.744 [Pipeline] // wrap
00:28:25.750 [Pipeline] }
00:28:25.766 [Pipeline] // catchError
00:28:25.775 [Pipeline] stage
00:28:25.777 [Pipeline] { (Epilogue)
00:28:25.791 [Pipeline] catchError
00:28:25.793 [Pipeline] {
00:28:25.813 [Pipeline] echo
00:28:25.815 Cleanup processes
00:28:25.821 [Pipeline] sh
00:28:26.107 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:26.107 927600 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:26.107 927720 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:26.123 [Pipeline] sh
00:28:26.410 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:26.410 ++ grep -v 'sudo pgrep'
00:28:26.410 ++ awk '{print $1}'
00:28:26.410 + sudo kill -9 927600
00:28:26.425 [Pipeline] sh
00:28:26.794 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:34.908 [Pipeline] sh
00:28:35.194 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:35.194 Artifacts sizes are good
00:28:35.208 [Pipeline] archiveArtifacts
00:28:35.215 Archiving artifacts
00:28:35.435 [Pipeline] sh
00:28:35.718 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:35.733 [Pipeline] cleanWs
00:28:35.744 [WS-CLEANUP] Deleting project workspace...
00:28:35.744 [WS-CLEANUP] Deferred wipeout is used...
00:28:35.752 [WS-CLEANUP] done
00:28:35.754 [Pipeline] }
00:28:35.775 [Pipeline] // catchError
00:28:35.788 [Pipeline] sh
00:28:36.069 + logger -p user.info -t JENKINS-CI
00:28:36.081 [Pipeline] }
00:28:36.104 [Pipeline] // stage
00:28:36.109 [Pipeline] }
00:28:36.126 [Pipeline] // node
00:28:36.131 [Pipeline] End of Pipeline
00:28:36.153 Finished: SUCCESS